Parsing: analysing a string of symbols according to the rules of a formal grammar.
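In code, that means taking a flat string and, by following the grammar's rules, building a structured value from it (or failing with an error). A minimal sketch in Python: a recursive-descent parser for a small, invented arithmetic grammar (all names here are illustrative, not drawn from any episode below).

```python
# A tiny recursive-descent parser for the invented grammar:
#   Expr   ::= Term   (('+' | '-') Term)*
#   Term   ::= Factor (('*' | '/') Factor)*
#   Factor ::= NUMBER | '(' Expr ')'
# Each grammar rule becomes one method; the output is a nested tuple tree.
import re

TOKEN = re.compile(r"\s*(\d+|[()+\-*/])")

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}: {text[pos]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens, self.i = tokens, 0

    def peek(self):
        return self.tokens[self.i] if self.i < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.i += 1
        return tok

    def expr(self):  # Expr ::= Term (('+' | '-') Term)*
        node = self.term()
        while self.peek() in ("+", "-"):
            node = (self.eat(), node, self.term())
        return node

    def term(self):  # Term ::= Factor (('*' | '/') Factor)*
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node

    def factor(self):  # Factor ::= NUMBER | '(' Expr ')'
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        tok = self.eat()
        if not tok.isdigit():
            raise SyntaxError(f"expected a number, got {tok!r}")
        return int(tok)

print(Parser(tokenize("2 * (3 + 4)")).expr())  # ('*', 2, ('+', 3, 4))
```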
RadioDotNet podcast, episode No. 111, March 23, 2025. The podcast is supported by Altenar, an international developer of high-load software. Learn more about their meetups and more: https://t.me/+_TzcYVVVqEgyZGIy Advertisement. Aistsoft LLC (ООО «Аистсофт»), INN 3327121697. Erid: 2VtzqwZ8Y7z
Podcast website: radio.dotnet.ru
Boosty (₽): boosty.to/RadioDotNet
Topics:
[00:01:55] — .NET 10 Preview 2 devblogs.microsoft.com/dotnet/dotnet-10-preview-2
[00:17:30] — System.Linq.Async is part of .NET 10 steven-giesel.com/blogPost/e40aaedc-9e56-491f-9fe5-3bb0b...
[00:25:25] — Visual Studio 2022 Preview 2 learn.microsoft.com/visualstudio/releases/2022/release-not...
[00:27:50] — Parse, Don't Validate deviq.com/practices/parse-dont-validate
[00:51:25] — Support for SLNX, a new, simpler solution file format devblogs.microsoft.com/dotnet/introducing-slnx-support-dotnet...
[00:58:55] — Briefly about various things officialaptivi.wordpress.com/mono-is-back-mono-6-14-0-released minidump.net/pro-net-memory-management marketplace.visualstudio.com/items dotnext.ru/callforpapers youtube.com/watch youtube.com/watch youtube.com/watch youtube.com/watch youtube.com/watch youtube.com/watch youtube.com/watch youtube.com/watch
Background music: Maksim Arshinov, «Pensive yeti.0.1»
“I am the embodiment of the worst case scenario. I am what every athlete fears and what pregnant parents dread. I am the catalyst for losing faith and questioning God.”

Those stunning words were written by today's guest, Dr. Amy Kenny, a brilliant author and scholar who lives with disabilities. That phrase—“worst case scenario”—resonates so deeply with me since I was born with a congenital brain defect that, essentially, exploded out of the clear blue when I was in my twenties.

Today I have the privilege of hearing from Dr. Amy Kenny. This brilliant woman's work has helped me see that the “worst case scenario” might actually be the best case scenario for my life. She's guided me in learning that my body—disabilities and all—is not an impediment to being an image bearer of God; it's actually a channel through which I can bear God's image most fully.

On this episode, we will…
- Unpack the controversial concept of a disabled God
- Investigate our standards of bodily perfection
- Parse out the critical difference between being cured and being healed
- Celebrate disability as a creative force
- Identify the traces of divinity to be found in our bodies

If you need permission to believe your body bears God's image—whether you have disabilities or not!—this episode is for you.

Show Notes:
My Body Is Not a Prayer Request: Disability Justice in the Church - https://a.co/d/3DZTIt1
Dr. Amy Kenny at Calvin University - https://youtu.be/URPc3nMll5s?feature=shared

Scriptures referenced in this episode:
Luke 14 - The Parable of the Banquet
John 9

***

There's so much more to the story. For more messages of hope, free resources, and opportunities to connect with me, visit https://hopeheals.com/katherine.
Follow me on Instagram: https://www.instagram.com/hopeheals/

Subscribe to The Good Hard Story Podcast!
Apple Podcasts: https://podcasts.apple.com/us/podcast/good-hard-story-podcast/id1496882479
Spotify: https://open.spotify.com/show/0OYz6G9Q2tNNVOX9YSdmFb?si=043bd6b10a664beb

Want a little hope in your inbox? Sign up for the Hope Note, our twice-a-month digest of only the good stuff, like reflections from Katherine and a curated digest of the Internet's most redemptive content: https://hopeheals.com/hopenote

Get to know us:
Hope Heals: https://hopeheals.com/
Hope Heals Camp: https://hopeheals.com/camp
Mend Coffee: https://www.mendcoffee.org/
Instagram: https://www.instagram.com/hopeheals/
In this special year-end episode of OpenObservability Talks, we are thrilled to host Charity Majors, co-founder and CTO of Honeycomb, for an insightful conversation on the state of observability. Charity and our host Horovits recently delivered keynotes at Open Source Observability Day, which sparked fascinating discussions on the evolution of open observability and its impact on the broader industry. Together, they run a 2024 yearly postmortem on the key insights and trends, exploring what the observability community and industry have accomplished this year. Looking ahead, they also discuss what's on the horizon for observability in 2025 and beyond.

Charity Majors pioneered the concept of modern Observability, drawing on her years of experience building and managing massive distributed systems at Parse (acquired by Facebook), Facebook, and Linden Lab building Second Life. She is the co-author of Observability Engineering and Database Reliability Engineering (O'Reilly). Join us for this fireside chat as we wrap up the year with one of the most influential voices in observability.

The episode was live-streamed on 9 December 2024 and the video is available at https://www.youtube.com/watch?v=D7ssNKAmYMs
You can read the recap post at https://medium.com/p/94f80fff77e8/

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show Notes:
00:00 - intro
01:51 - major observability trends of 2024
05:14 - OpenTelemetry trends
07:50 - Observability 2.0
14:45 - AI for DevOps and Observability
27:02 - Platform engineering
36:37 - observability query and data analytics
43:40 - observability for business insights
46:53 - how to start observability in Greenfield projects
50:15 - additional use cases for observability
54:11 - controlling cost of observability
58:47 - outro

Resources:
Practitioner's guide to wide events: https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/
Charity Majors' blog on Observability 2.0: https://www.honeycomb.io/blog/time-to-version-observability-signs-point-to-yes
Observability Is A Data Analytics Problem: https://insideainews.com/2022/04/07/observability-is-a-data-analytics-problem/
Platform as a Product survey by the CNCF: https://www.linkedin.com/feed/update/urn:li:share:7267977952242397185/
SaaS observability: https://medium.com/p/b2db276305b2
Expensive Metrics: Why Your Monitoring Data and Bill Get Out Of Hand: https://medium.com/p/e5724619e3f1
Sampling best practices: https://logz.io/learn/sampling-in-distributed-tracing-guide/

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
============
Twitter: @horovits
LinkedIn: www.linkedin.com/in/horovits
Mastodon: @horovits@fosstodon
BlueSky: @horovits.bsky.social

Charity Majors
============
Twitter: https://x.com/mipsytipsy
LinkedIn: https://www.linkedin.com/in/charity-majors
Mastodon: @mipsytipsy@hachyderm.io
BlueSky: https://bsky.app/profile/mipsytipsy.bsky.social
In this episode, Parker and Landon discuss the recent Talladega race, listener feedback, and the ongoing legal battle between 23XI Racing and NASCAR. They delve into the implications of the charter system in NASCAR, the nature of injunctions in legal proceedings, and the broader economic context of motorsports compared to traditional stick-and-ball sports. The conversation highlights the challenges faced by racing teams in managing costs and the impact of legal frameworks on the sport. In this segment, the conversation delves into various aspects of sports contracts, particularly in racing, and the implications of spending caps. The discussion transitions to the recent Talladega race, highlighting its viewership and the dynamics of superspeedway racing. The hosts analyze the strategies employed during the race, including a controversial move by Kyle Busch that sparked debate among fans and commentators. The segment concludes with reflections on fan reactions and predictions for future races. In this conversation, the hosts delve into various aspects of NASCAR and F1 racing, discussing race dynamics, driver strategies, and the impact of technical changes on racing tracks. They analyze viewer engagement trends in NASCAR, particularly with the CW network, and explore the implications of recent track modifications. The discussion also touches on the safety car situation in F1 and concludes with insights into IndyCar's new ventures aimed at penetrating the Dallas market.

Leave us a voicemail! https://moneylap.com

Timestamps:
00:00 - Intro
06:44 - Listener Feedback and Reviews
15:19 - 23XI Files for Preliminary Injunction, Our Non-Lawyer Thoughts
29:00 - Talladega Race Overview
37:01 - Kyle Busch's Position in the Race
43:03 - Final Lap Tactics
51:25 - NASCAR Rules are Exhausting
52:11 - Chicago Attendance Success
52:55 - Xfinity Series Viewership
53:11 - JRM Disqualification Appeal Outcome
54:09 - Changes to the Roval Track
1:01:51 - Alonso's Thoughts on Why No Safety Cars
1:06:41 - F1 Car Launch Event Announcement
1:08:10 - IndyCar's New Arlington Grand Prix
1:14:08 - Outro
(Timestamps are approximate and may require a little scrubbing to find the start of the topic)

The Money Lap is the ultimate motorsport podcast with Parker Kligerman and Landon Cassill, professional racecar drivers and hilarious hosts, taking you through the world of motorsports. Covering NASCAR, F1, IndyCar, and more, they'll provide the scoop, gossip, laughs, and stories from the racing biz.

With over 1100 unique products currently in stock, Spoiler Diecast boasts one of the largest inventories in the industry. We are NASCAR focused, offering a wide range of diecast and apparel options. But that's not all. We've expanded our catalog to include diecast for dirt/sprint cars, IndyCar, and F1. As passionate racing fans ourselves, we're constantly growing our offerings to cater to different forms of racing. Use promo code "moneylap" for free shipping and 5% off all orders. https://www.spoilerdiecast.com/

Sign up today for the Money Lap newsletter: https://themoneylap.com/subscribe
Read by industry executives in NASCAR, F1, and IndyCar, our newsletter and podcast are essential resources for any motorsports enthusiast. Join our community of passionate fans and industry insiders today. Welcome to the future of motorsports media!

Copyright 2024, Pixel Racing, LLC. All Rights Reserved.
In this episode of Breaking Changes, Postman Head of Product-Observability Jean Yang sits down with Charity Majors, co-founder and CTO of Honeycomb. Charity dives into her journey as a tech founder, exploring her experiences in startups and operations engineering, as well as the significance of observability and the industry's evolving landscape. Charity emphasizes the importance of transparency and trust in team building and reflects on Honeycomb's growth and evolving hiring practices.

For more on Charity Majors, check out the following:
LinkedIn: https://www.linkedin.com/in/charity-majors/
Twitter: https://twitter.com/mipsytipsy
Personal Website: https://charity.wtf/
Honeycomb Website: https://www.honeycomb.io/

Follow Jean on Twitter/X @jeanqasaur. And remember, never miss an episode by subscribing to the Breaking Changes Podcast on your favorite streaming platform, company website at https://www.postman.com/events/breaking-changes or Postman's YouTube Channel—just hit that bell for notifications.

#BreakingChanges3 #apis #podcast #postman #honeycomb #TechLeadership #EntrepreneurialJourney #ProfessionalGrowth #TechInnovation #StartupSuccess #BusinessInsights #EntrepreneurMindset #careersuccess #PersonalGrowthJourney #observability

Episode Timestamps
00:00 - Introduction and Background
03:58 - Getting into Startups and Leadership
07:07 - Transition to Leadership
08:47 - Current Role and Challenges
15:01 - Observability and Breaking Changes
20:23 - Smooth Changes and Lessons
26:01 - Failed Changes and Lessons
30:40 - What Hasn't Changed
33:09 - Surprising Changes
35:09 - People Growth and Hiring
39:57 - Evolution of Hiring Practices
40:27 - Evolution of Hiring and Decision-Making
42:10 - Leveraging Hype while Staying True to Vision
45:10 - Changing the Industry and Fostering Inclusive Culture
47:06 - Underrated Breaking Changes in the Industry
49:24 - AI and Understanding Systems
50:03 - Exciting Changes in Observability
50:47 - Where to Find Charity
This is a recap of the top 10 posts on Hacker News on July 22nd, 2024. This podcast was generated by wondercraft.ai.

(00:39): Jellyfin: We're Good, Seriously
Original post: https://news.ycombinator.com/item?id=41031998&utm_source=wondercraft_ai

(01:48): Kawaii – A Keychain-Sized Nintendo Wii
Original post: https://news.ycombinator.com/item?id=41038552&utm_source=wondercraft_ai

(02:59): Jiff: Datetime library for Rust
Original post: https://news.ycombinator.com/item?id=41031037&utm_source=wondercraft_ai

(04:04): No More Blue Fridays
Original post: https://news.ycombinator.com/item?id=41033579&utm_source=wondercraft_ai

(05:24): Copying is the way design works
Original post: https://news.ycombinator.com/item?id=41038372&utm_source=wondercraft_ai

(06:48): Scientists discover a new hormone that can build strong bones in mice
Original post: https://news.ycombinator.com/item?id=41036462&utm_source=wondercraft_ai

(07:54): Parse, Don't Validate (2019)
Original post: https://news.ycombinator.com/item?id=41031585&utm_source=wondercraft_ai

(09:09): Ryanair wins screen scraping case against Booking.com in US court ruling
Original post: https://news.ycombinator.com/item?id=41031960&utm_source=wondercraft_ai

(10:18): Eza: A modern, maintained replacement for ls
Original post: https://news.ycombinator.com/item?id=41031112&utm_source=wondercraft_ai

(11:25): No one expects young men to do anything and they respond by doing nothing (2022)
Original post: https://news.ycombinator.com/item?id=41032918&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Heath Cummings is joined by Matt Olson to discuss how to use what they learned in OTAs for drafting and what they're looking for in training camp that can give them the edge. Intro (0:00) How to Parse coachspeak (1:40) Non-Negotiable Dynasty league, rule or setting (3:42) What matters most over the next 6 weeks? (5:49) Commanders running back reports (7:26) Cardinals running back reports (11:07) Bills wide receiver reports (14:17) Patriots wide receiver reports (19:18) Does Rashee Rice get suspended this year? (24:22) Rookie QBs battling for jobs (27:56) Injury updates (30:46) Bo Nix (35:22) Listener questions (36:52) To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Charity Majors is the co-founder and CTO of honeycomb.io. She pioneered the concept of modern Observability, drawing on her years of experience building and managing massive distributed systems at Parse (acquired by Facebook), subsequently at Facebook, and at Linden Lab building Second Life. She is the co-author of Observability Engineering and Database Reliability Engineering (O'Reilly). She loves free speech, free software and single malt scotch.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

News of the week:
CNCF Blog: Vitess 20 is now Generally Available
Vitess Blog: Announcing Vitess 20
Anthropic Blog: Claude 3.5 Sonnet
KubeCon India 2024 CFP
Apps on Azure Blog: Announcing support of OCI v1.1 specification in Azure Container Registry
VMware Tanzu Blog: Announcing VMware Tanzu Greenplum 7.2: Powering Your Business with Enhanced Performance and Advanced Capabilities
VMware Tanzu Blog: Join the public beta for GenAI on Tanzu Platform today!
CNCF: Adobe End User Journey Report

Links from the interview:
Honeycomb.io
O'Reilly Book: Observability Engineering
O'Reilly Book: Database Reliability Engineering
Charity's blog site: charity.wtf
Charity Blog: Questionable Advice: "My boss says we don't need any engineering managers. Is he right?"
Daniel H. Pink book: "Drive: The Surprising Truth About What Motivates Us", in which "He examines the three elements of true motivation—autonomy, mastery, and purpose—and offers smart and surprising techniques for putting these into action in a unique book that will change how we think and transform how we live."
Charity blog on Stack Overflow: "Generative AI is not going to build your engineering team for you", in which she talks about how the tech industry is an apprenticeship industry.
Charity Majors in the Google Cloud Next 2024 Developer Keynote
honeycomb.io blog: "How Time Series Databases Work—And Where They Don't" by Alex Vondrak
honeycomb.io blog: "Why Observability Requires a Distributed Column Store" by Alex Vondrak

Links from the post-interview chat:
CNCF Kubernetes Community Days (KCDs)
CNCF Kubernetes Community Days (KCDs) on GitHub
Julia Evans Blog
Wizard Zines by Julia Evans
"Help! I Have a Manager!" zine by Julia Evans
Aja Hammerly aka "thagomizer" blog
"The Toaster Parable"
"Manager Toolkit: Manage The Person In Front Of You"
"Manager Toolkit: Useful Manager Phrases for 1:1s"
"Manager Toolkit: You Talk, I Type"
In this episode of the Steering Engineering Podcast, we explore how emerging AI technologies, such as AI code assistants, are reshaping the landscape of software design and development. Our focus is on how engineers are transitioning from hands-on coding to roles that more closely resemble that of an engineering manager. We examine the implications of generative AI for software engineering education, on-the-job training, roles/responsibilities, and career development.

Charity Majors is an operations and database engineer, "sometimes" engineering manager, author, and CTO at Honeycomb. Charity was a production engineering manager at Facebook and spent several years working on Parse. She also spent several years at Linden Lab, working on the infrastructure and databases that power Second Life. Charity is the co-author of O'Reilly's Database Reliability Engineering and of "Observability Engineering: Achieving Production Excellence." She loves free speech, free software, and single malt scotch.
Maybe less thinking through all the ins and outs, and more simply living, as best we can, covered by the grace of God.
Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes. The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions. Transcript: AD: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky. And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error. STEPHANIE: That sounds reasonable to me. JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called parse, don't validate: the idea that you use a parser like this to sort of transform data from more loose to more strict values, values where you can have more assumptions. And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the contract, create this class." 
And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice. STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that. JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate. Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern. STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain? JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that. STEPHANIE: Gotcha. JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors. But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that. STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that? JOËL: This is more about the developer experience. STEPHANIE: Got it. JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. 
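A rough sketch of the pipeline Joël describes here, written in Python with the jsonschema library rather than the Ruby dry-rb stack used on his project; the ShoppingCart class and the schema are invented for illustration, not taken from the episode.

```python
# A hypothetical "parse, don't validate" pipeline: validation and
# construction are bundled in one parser object, so callers only ever
# receive a richer, already-checked value or an error.
from dataclasses import dataclass

from jsonschema import ValidationError, validate  # pip install jsonschema

CART_SCHEMA = {
    "type": "object",
    "required": ["items", "currency"],
    "properties": {
        "items": {"type": "array", "items": {"type": "string"}},
        "currency": {"type": "string", "minLength": 3, "maxLength": 3},
    },
}

@dataclass(frozen=True)
class ShoppingCart:
    items: list
    currency: str

class CartParser:
    def parse(self, payload: dict) -> ShoppingCart:
        # Step 1: validate the loose input (raises ValidationError on bad data).
        validate(instance=payload, schema=CART_SCHEMA)
        # Step 2: immediately construct the richer object from the valid input.
        return ShoppingCart(items=payload["items"], currency=payload["currency"])

try:
    cart = CartParser().parse({"items": ["book"], "currency": "USD"})
    print(cart)  # ShoppingCart(items=['book'], currency='USD')
except ValidationError as error:
    print(f"bad payload: {error.message}")
```

Because the two steps are co-located, downstream code only ever sees a ShoppingCart, which is the "richer objects that have been validated" safety the conversation turns to next.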
So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape. And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that. So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. But it's a nicer developer experience, but also there's a little bit more safety in that because now you're sort of always working with these richer objects that have been validated. STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense. JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on. STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts. JOËL: Yeah. STEPHANIE: Cool. JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system. STEPHANIE: Absolutely. That's very exciting. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet. JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor? STEPHANIE: Yeah, it's the latter. JOËL: Okay. 
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs]. JOËL: Oh no! STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears. JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me. STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky. And then, there's also a few other parts where I am highlighting with, like, a border or something around certain texts that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder to and a lot more effort to [laughs] do in practice, I'm finding. JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk. STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding a lot more work [laughs] because just thinking about, you know, the code I'm showing from a different lens or perspective. JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell? STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it. JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th. 
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else. JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top? STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support product, you know, the other users of your app. So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing. JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right? STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right? JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming. And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one. 
If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior. STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that. JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work? STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally. JOËL: But it's an optional test failure, so you're allowed to let that test fail. STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically. JOËL: I see. STEPHANIE: Or an ignore list, yeah. JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync. STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs]. JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you. STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods. 
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different? STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keeping standards in place without necessarily, like, prescribing too much about, like, how it needs to be done. JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant. STEPHANIE: Ooh. JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app. STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here? JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a type language? Can you do it with some sort of input validation at runtime? Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match. So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it. STEPHANIE: Hmm. That is very compelling to me. JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that? STEPHANIE: Yeah, I have. 
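A loose sketch, in Python for brevity, of the boot-time invariant check Joël just described: every file in a directory must have a matching entry in a registry list, and the app refuses to boot otherwise. The directory path and the ADMIN_PRESENTERS name are hypothetical, not from the client project discussed.

```python
# Hypothetical boot-time invariant check: the files in app/models must
# stay one-to-one with the entries registered for the admin dashboard.
from pathlib import Path

ADMIN_PRESENTERS = ["orders", "users"]  # the allow/ignore list kept elsewhere

def check_presenter_invariant(models_dir: str = "app/models") -> None:
    models = {path.stem for path in Path(models_dir).glob("*.rb")}
    registered = set(ADMIN_PRESENTERS)
    missing = models - registered   # files nobody registered
    stale = registered - models     # entries whose files are gone
    if missing or stale:
        # Raising at boot means any CI job that loads the app fails,
        # without needing a dedicated test for this rule.
        raise RuntimeError(
            f"admin presenter registry out of sync; "
            f"unregistered: {sorted(missing)}, stale: {sorted(stale)}"
        )

try:
    check_presenter_invariant()  # call once during application boot
except RuntimeError as error:
    print(error)  # here: two stale entries, since this demo has no app/models dir
```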
And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous. When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check. JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool. STEPHANIE: Ooh. JOËL: And what makes sense in a linter. STEPHANIE: What was the argument for the linter being a worse tool by doing that? JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article. STEPHANIE: I'll have to revisit it after the show [laughs]. JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes. STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team? JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you." STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen." I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking. JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. 
Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go. STEPHANIE: Interesting. It's kind of like the honor system, then [laughs]. JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there's invariance you want to hold. Maybe there's certain things we're, like, this should never get to production. Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide. STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum. JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariance you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool. STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics which measure code complexity in things like that. How would you feel if that were a check that was blocking your work? JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively flog and other things that are fancier on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build. You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up." STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. 
I can imagine it's blocking, but a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice. JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that. STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers? JOËL: I'm going to throw a fancy phrase at you. STEPHANIE: Ooh, I'm ready. JOËL: Signal-to-noise ratio. STEPHANIE: Whoa, uh-huh. JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds. So, sort of keeping track on what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track on maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise, I think is useful—to go back to that word again, heuristic for managing whether a tool is still helpful. STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." Any developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process. JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check. And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. 
So, splitting that out and say, "Look, we've got blocking and unblocking because we recognize these two classes of problems can be a helpful solution to this problem." STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system? [laughter] JOËL: I may be too much of a developer. STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations. JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up. STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.
This week on Screaming in the Cloud, Corey is joined by good friend and colleague, Charity Majors. Charity is the CTO and Co-founder of Honeycomb.io, the widely popular observability platform. Corey and Charity discuss the ins and outs of observability 1.0 vs. 2.0, why you should never underestimate the power of software to get worse over time, and the hidden costs of observability that could be plaguing your monthly bill right now. The pair also shares secrets on why speeches get better the more you give them and the basic role they hope AI plays in the future of computing. Check it out!

Show Highlights:
(00:00) - Reuniting with Charity Majors: A Warm Welcome
(03:47) - Navigating the Observability Landscape: From 1.0 to 2.0
(04:19) - The Evolution of Observability and Its Impact
(05:46) - The Technical and Cultural Shift to Observability 2.0
(10:34) - The Log Dilemma: Balancing Cost and Utility
(15:21) - The Cost Crisis in Observability
(22:39) - The Future of Observability and AI's Role
(26:41) - The Challenge of Modern Observability Tools
(29:05) - Simplifying Observability for the Modern Developer
(30:42) - Final Thoughts and Where to Find More

About Charity
Charity is an ops engineer and accidental startup founder at honeycomb.io. Before this she worked at Parse, Facebook, and Linden Lab on infrastructure and developer tools, and always seemed to wind up running the databases. She is the co-author of O'Reilly's Database Reliability Engineering, and loves free speech, free software, and single malt scotch.

Links:
https://charity.wtf/
Honeycomb Blog: https://www.honeycomb.io/blog
Twitter: @mipsytipsy
Everyone talks about digital transformation, but it seems like no one really explains what it means... until now. In today's episode, Rob and Justin dive deep to cut through the buzzwords and lay out the reality. They're tackling why digital transformation isn't about making huge, instant changes but rather about the smart, subtle tweaks in areas that usually get ignored but badly need a digital lift. They dive into how leveraging tools like the Power Platform can spark significant improvements, showing that it's the small changes that can really boost efficiency and smooth out your workflow. Ever found yourself wondering how to translate all the chatter about digital evolution into actionable steps? That's exactly what Rob and Justin are unpacking. They're guiding you through how minor, yet clever adjustments can transform your processes. It's all about enhancing the routine, one step at a time. And, as always, if you enjoyed the episode, be sure to leave us a review on your favorite podcast platform to help new listeners find us. EPISODE TRANSCRIPT: Rob Collie (00:00): Hello, friends. In today's episode, Justin and I demystify what is meant by the phrase digital transformation. Phrases like that are one of my least favorite things. Why do I say that? Well, these are phrases that get used a lot. They cast a big shadow. You encounter them almost anywhere you go. That's fine by itself. But in the case of digital transformation, that massive shadow is multiplied by no one understanding what it actually means. (00:30): Now earlier in my career, I used to be really intimidated by things like this. Everyone seems to know what this means because they're using it all the time. I don't know what it means, so should I just pretend and play along like everyone else? But at some point, many years ago, I had this moment where I realized that the Emperor has no clothes. It almost never has clothes. Now when I encounter phrases like this, instead of being like paralyzed or intimidated, I instead start working on my own definition, and this process takes time. I've been picking apart and stewing on the definition of digital transformation now for probably the better part of a year plus. Somewhere along the way in that process, I realized that we at P3 are doing quite a bit of digital transformation work, I just hadn't realized it yet because I didn't have a good enough definition. (01:18): Lately, I've been noticing that my definition for digital transformation has reached a steady state. It's not changing over time anymore, which tends to be my signal that I've arrived at a definition that works. Now seemed like a good time to sit down and compare notes with Justin, who's been following his own parallel process of arriving at a definition. I'm very pleased with where we landed. A practical and specific definition that can be reduced to practice with an almost paint-by-numbers type of approach. (01:47): If you asked someone for a definition of something like digital transformation, and by the time they're done giving you their definition, you can't practically boil that down to what it means for you, that's not a problem with you, that's a problem with the definition. A lot of times, people's definitions for terms like this are almost like deliberately vague, as a means of projecting power, as a means of actually controlling you. You'll get a lot of definitions that are engineered to sound smart, engineered to sound authoritative, but not engineered to provide anything resembling clarity.
Because if you sound smart, and you sound authoritative, but you leave your audience hungry, you create a feeling of dependency. Folks, I just think that's yucky. That's just gross. (02:35): To show you what I mean, I just ran the Google search, "What does digital transformation mean?" The very top hit, enterprisersproject.com, defines digital transformation as "the integration of digital technology into all areas of a business, resulting in fundamental changes to how businesses operate and how they deliver value to customers." Did that clear it up? Nope. Boiling that one down, it sounds a lot like you should use computers and use them to make changes. But it sounds smart, sounds authoritative. (03:06): Here's the second result from our old favorite, McKinsey. McKinsey defines digital transformation as "the process of developing organizational and technology-based capabilities that allow a company to continuously improve its customer experience and lower its unit costs, and over time sustain a competitive advantage." All right, so that one sounds like McKinsey is almost starting with that original definition and adding additional value to it. They're saying use computers to improve, and to make money, and to compete. If you have $1 million to spend, you can get advice like that. (03:43): All right, with those two definitions, we don't even need an episode. We can just skip it? Because everyone knows exactly what they're talking about. These are the top two hits on Google, folks. Useless. Part of the reason these definitions are useless, again, is because they're designed to be useless. But I also think that a lot of times you hear definitions like this because the people writing them actually cannot boil them down. By the time you come up with a truly useful definition, or a framework, or a guide for understanding a topic like this, then almost by definition, it's not going to sound nearly as sexy, nearly as smart. It's going to sound relatively simple, mundane. But those are the valuable definitions, the ones that we can actually apply, that make a difference in how we actually view our own business. (04:29): That's what we set out to do in this episode. I think we succeeded, came up with a very practical, applicable definition that you'll never find on McKinsey's website. Let's get into it. Speaker 2 (04:42): Ladies and gentlemen, may I have your attention please? Speaker 4 (04:46): This is the Raw Data by P3 Adaptive Podcast, with your host, Rob Collie, and your cohost, Justin Mannhardt. Find out what the experts at P3 Adaptive can do for your business. Just go to p3adaptive.com. Raw Data by P3 Adaptive is data with the human element. Rob Collie (05:12): Justin, one of the things that we really like to do, I really like to do, I think you do as well, is to take a phrase or topic, and demystify it. Especially phrases that you hear repeated over, and over, and over again, and everyone has to pretend that they understand what they mean. But even when they do, they often have very different pictures in their heads. (05:33): One that I think is due for a treatment, and we've hinted at it once before on this podcast but not with any depth, is digital transformation. What does it mean? Justin Mannhardt (05:45): What does it mean, what does it not mean, all parts in between. Rob Collie (05:50): Starting with the places where I hear it. I often hear it in the context of this is something that's already done.
The big talking head analysts at places like Gartner- Justin Mannhardt (06:00): Yeah. Rob Collie (06:00): Will talk about it like it's in the rearview mirror. "The shift to digital, the pivot to digital has forced the following things," so has forced, it's a past tense thing. Which further underlines the idea that well, if it's already happened, clearly everyone knows what it means. They don't stop to define it, they're just tossing that aside as a means of getting to the next point. I find that to be one of the most troubling habits of the talking heads. (06:28): The first few times I encountered this phrase, I didn't really know what it meant. I imagined that it meant switching to ecommerce from brick-and-mortar. Justin Mannhardt (06:37): Yeah. Rob Collie (06:37): I didn't even realize that that was the impression I had, it was just this vague feeling in the back of my head. Justin Mannhardt (06:42): The word digital, I'm just thinking about this now because a lot of times, you'll look at one of these diagrams, it's like, "Your digital transformation wheel includes all these things." You'll see something like, "Move to the cloud." I'm like, "Okay, were the servers with the software, was that software analog or something?" Rob Collie (06:59): Yeah, we've been digital for a long time, right? Justin Mannhardt (07:01): Yeah. Rob Collie (07:01): Most broadly defined, you could say that the digital transformation really got going with the adoption of the PC. Justin Mannhardt (07:09): Right. Rob Collie (07:10): That was when digital transformation started. In the sense that it started in the 1980s, maybe it is something worth talking about somewhat in the rearview mirror, but that's not what they mean. They don't mean the adoption of the PC. Justin Mannhardt (07:23): No. But it's interesting, when you think about the timeline of technology evolution. People say, "Oh, you described it as past tense." Digital transformation has occurred en masse in the market. Now today, it's like AI is here, en masse in the market. But with the pace at which new things are coming out, what's really happening is just that the long tail is longer, stretching back to where companies actually are in this journey. It's not like the entire industrial complex has been collectively moving to the modern current state across the board. There's companies that are still running SQL 2000, that's their production world still. This isn't something that's happened. Rob Collie (08:09): I think that the big talking head analysts often tend to really only talk about the most elite sub-strata of even their own clients. When they talk about this as something that's completely done, even most of Gartner's paying clients, I would suspect, aren't anywhere close to done. But we still haven't really started talking about what it actually means. (08:32): Let's say it is not the switch from paper and pencil systems to electronic line-of-business systems. Not only do we have the PC, and that's been long since mainstreamed, the notion of line-of-business software, server-based software, whether cloud or otherwise, is also, I think, incredibly well entrenched. We're done with having key business systems running in a manual format. That's long since rearview. That also isn't what they mean by digital transformation. (09:07): Of course, both of those are digital and they were huge transformations, but that's not the digital transformation we're talking about. It's anything that's happened after that. Justin Mannhardt (09:15): Yeah.
Rob Collie (09:16): It's a lot harder to pin down the things that happened after that. Justin Mannhardt (09:20): In general, I agree with you because the big blocks, software, the availability of the cloud, not having intensive paper process in most companies, that's largely been accomplished. To different levels, of course. Then, what's left? What's the definition? What are we trying to do? Rob Collie (09:41): Well, if you think of the line-of-business application and the PC, the PC interfaces with all the line-of-business apps. I would say that, and even this is not 100% true, but I would say that the conversion to digital systems is complete, or complete-ish. Justin Mannhardt (09:59): Okay. Rob Collie (09:59): When you look at your business as individual silos. Justin Mannhardt (10:03): Say more. You've got a digital environment for finance, digital environment for sales, is that what you mean? Rob Collie (10:09): Yeah. Core workflows have largely been digital for a while. All the workflows that take place between systems, or the workflows that take place adjacent to a system, those are the things that we're talking about when we talk about digital transformation, going after those workflows. (10:30): Everything we've been doing in the world of business software since at least the 1980s has been digital transformation. Justin Mannhardt (10:38): Yeah. Rob Collie (10:39): But our digital transformation, we're really talking about at least the third chapter. It's not chapter one or two. It's like the next frontier, identifying and going after a new class of workflows that would benefit from essentially software support. Justin Mannhardt (10:56): Right. Rob Collie (10:56): Okay. Now because almost by definition, just by subtraction ... We're saying, "Look, we've got the PC, we've got the line-of-business systems that handle the core workflows within a silo. What's left?" Well, it's almost like a perfect mathematical proof. What's left is the stuff between and outside. (11:14): Given that everyone's mix of line-of-business systems is, I like to say, best of breed, meaning random. It's whatever we decided at the time. It seemed like a good idea at the time. Legacy. Justin Mannhardt (11:25): Yeah. Rob Collie (11:26): You're never going to have anything off-the-shelf that helps you solve the workflows. The middleware problem between your systems is always going to be a custom solution. (11:38): We should give examples of these. When I said outside or adjacent to, there are even workflows that aren't really between systems; they're just the offline portion of working with the system. I'm thinking about a budgeting process, for instance. The world's first budgeting systems were mostly there to record the budget that you enter into them. As those budgeting systems have gotten better, they've included more and more of the human workflow that goes into creating, and evaluating, and kicking the tires before it's finalized. Those offline human workflows, getting more and more structured about them, can make a huge difference. Justin Mannhardt (12:19): Not just structured, Rob, more tightly integrated with the adjacent system itself. I like that adjacency, because if you have a financial system where your budget or your forecast lives, there's a marshaling of activity, analysis, input. Then you say, "Okay, we need to get it to look like this," and then we put it in the thing. What happens in that process is you get all sorts of scattered iterations of ideas, and it gets loose.
But if you could have all that iteration tight, the final submission is already handled or much easier. Rob Collie (12:51): Yeah. Sticking with the budgeting example for a moment, it still echoes one of the themes I mentioned for the between systems, the between silos case. Which is that one-size-fits-all systems, off-the-shelf systems, they really struggle to address all the nuances of your particular business. It's very, very difficult. The more, and more, and more you try to get the offline processes, the human processes brought into the digital workflow, the more an off-the-shelf software package is going to struggle. It's getting further and further away from the safety of the core of the task. (13:28): This is why the Power Platform approach to budgeting and planning is often, in fact almost always, more effective: cost effective, time effective, results effective. The core libraries for doing all of the things that you need to do are basically already there and it's inherently designed to be customizable. Justin Mannhardt (13:48): And very nimble. Even the big players in FP&A software, they're not that great, in our opinion, at the end of the day. But the price points just exclude anybody that's not a very sizeable, formidable company. You're not looking to spend that kind of money if you're even a few hundred million a year type operation. You're just not going to sign up to that agreement. You are left with a middleware type of a problem, that you're either solving with spreadsheets, pen and paper, or something else. Our platform can slide right in there. Rob Collie (14:26): Of course, there is a huge advantage to performing a "digital transformation" on a process like that because the human, offline, pen and paper, sending random emails, getting answers, tracking them, it's incredibly tedious, it's incredibly error-prone. Just super, super slow. It's not like you can perform many iterations. You're really only going to be able to pull off one iteration, and you call it good. But you're just going to miss so much. The budget could have been so much better. If you've got a bad budget, of course you're going to pay for that later. (14:58): That's the adjacent case. Let's talk about the between a little bit as well. What's an example of a workflow that would span across different line-of-business systems but require a human being, or humans, to essentially carry the buckets of water between those different pipes? Justin Mannhardt (15:18): We'll make up a company today, Rob, we'll start a new company and it's going to be called I Manufacture Things, Inc. Hey. At I Manufacture Things, Inc., I've got a sales team. Rob Collie (15:28): Do we make things other than ink? Justin Mannhardt (15:30): No, that's incorporated. Rob Collie (15:32): Oh, okay. Justin Mannhardt (15:32): We just make things. Rob Collie (15:34): Can't help it. Can we be We Manufacture Things Ink, Inc.? Justin Mannhardt (15:38): Sure. Rob Collie (15:39): All right. But anyway, we manufacture things. Justin Mannhardt (15:41): There you go. We've got a sales team and they're using a CRM system, such as Salesforce, or HubSpot, or whatever. They're out there, they're doing quotes, they're tracking opportunities, and eventually someone says, "Yeah, I'd love to buy a pallet of ink," or whatever. Our company, we're not using the CRM to deal with the production and fulfillment of that order.
Okay, so now there's this process where my order form, let's not use any paper in this example, it's still digital but it lands as a PDF form in someone's email inbox that says, "Hey, Customer Service Rep, here's an order." Oh, okay. Now I'm keying said order into our production system that says, "Go manufacture this thing." Now we need to ship the thing out somewhere, and now we're in our logistics system. (16:33): There's all these little hops between systems. Now, technology has become more open, and sure, there's things like APIs and code-based ways to integrate them, but that's not in range for a lot of companies. That's an example of where you could stitch in these little Power Platform type solutions to just, "Hey, let's map the relevant fields and information from the CRM into the order management system." If there's some blanks that need to get filled in, that's okay. Maybe I'm just starting from a queue of new orders right in the system, and I'm maybe adding three or four pieces to that puzzle instead of all of it. Rob Collie (17:12): Okay. I want to make a global note here. Note that we're talking about this broad topic, digital transformation. We're already way down into very detailed, specific use cases. In my opinion, that's what digital transformation is, it's a collection of all of these individual use cases where things can get faster, more efficient, more accurate. It is the sum of many small things. Each one of them might have tremendous impact. This is the way. (17:46): In this particular example, I've been describing the Power Platform as the world's best middleware for a while now. Even Power BI is middleware. Its beautiful, beautiful, beautiful capability is that it can simultaneously ingest data from multiple different line-of-business silos that have never once talked to each other. The only place that they meet is in a Power BI semantic model. Justin Mannhardt (18:10): Yeah. Rob Collie (18:10): And they play a symphony together that Power BI makes them play. They still have never seen each other, but Power BI is what bridges the gap. Now, Power BI is read-only by itself, it doesn't make changes to any systems. (18:25): In this particular case, it sounds like Power Apps and Power Automate's music. Let's just get really tangible here. I know that it's a very specific, and fictional, example. But lots of people have almost exactly this problem. Justin Mannhardt (18:39): Yeah. Rob Collie (18:39): Just talk me through what a solution to that particular problem might look like if we implemented it in the Power Platform. How much work, how much elapsed time do you think it would take? Let's dig into this one a little bit. Justin Mannhardt (18:51): If what I want to do is, when we receive an order or close a deal in our CRM, I want that to move some data to another system, let's just say that's assumed. Power Automate can solve this need. Obviously there's a lot of detail, you can look some things up online, or you can email robandjustin@p3adaptive.com and we can trade some ideas here. But there are tons of out-of-the-box connectors, and in those connectors they have what's called a trigger. I could say, "When this happens in Salesforce," for example, "I want to start building a flow." I can say, "Okay, I want these fields, and I want to write them from Salesforce to this destination." Maybe that destination's a database, maybe that destination is another system that Power Automate supports that you can write to.
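In Power Automate this is a visual, drag-and-drop flow, but the trigger-map-write pattern Justin is describing can be sketched in a few lines of plain Python. Everything here is invented for illustration: the event shape, the field names, and the destination table are assumptions, not any real connector's payload.

```python
import sqlite3

# Hypothetical shape of a "deal closed won" event coming out of a CRM.
event = {
    "opportunity_id": "006XX0000012345",
    "account_name": "We Manufacture Things Ink, Inc.",
    "amount": 18500.00,
    "ship_to": "123 Warehouse Rd",
}

# The field-mapping step of a visual flow, written out explicitly:
# CRM field -> destination column.
FIELD_MAP = {
    "opportunity_id": "crm_ref",
    "account_name": "customer",
    "amount": "order_total",
    "ship_to": "shipping_address",
}

def write_order(event: dict, db_path: str = "orders.db") -> None:
    # Map the incoming event onto the destination's columns, then insert.
    row = {dest: event[src] for src, dest in FIELD_MAP.items()}
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS new_orders "
            "(crm_ref TEXT, customer TEXT, order_total REAL, shipping_address TEXT)"
        )
        conn.execute(
            "INSERT INTO new_orders VALUES "
            "(:crm_ref, :customer, :order_total, :shipping_address)",
            row,
        )

write_order(event)
```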
(19:37): It could be just this simple mapping exercise. When this happens over here, grab this data, and create a new record over here in this system. Rob Collie (19:46): Okay. A trigger in this case would look something like, "When a record in Salesforce is marked as a win," we've signed a deal, someone wants to buy a pallet of whatever. Then automatically, it wakes up, looks at the record in question, the data associated with the sales win in Salesforce, grabs certain fields out of the Salesforce record, certain pieces of information. Let's keep it simple for a moment, and it just pushes them into a simple SQL database or something that could be stood up in minutes. We don't have to spend a lot of time. Or maybe, we just drop it into OneLake. Justin Mannhardt (20:23): Lots of options there. I think this is a nice little simple example, because when you talk about Power BI, that's a very tangible apparatus. These are the things you set up, and you never really go ... You monitor it of course, but you never really go engage with it. You put the glue in place, and it's magic and it's cool. That's a simple version. (20:44): But sometimes, the data coming from its source is incomplete relative to what its destination requires to take the next action. In this type of scenario you could either say, "Well okay, once it gets over there, we're just in that system, maybe we're adding to it." But this is where you might insert a Power App into the process. Win a deal in Salesforce, that triggers: grab these fields. Let's go ahead and write it over to Dataverse, this is the back end of a Power App, for example. Or a database, or SharePoint, who knows. It depends on what makes sense. (21:18): Now we've got a Power App that maybe has a little work queue that says, "Hey, Rob, you've got new orders." You're either approving them, or you're annotating them with additional information. You're doing the human process, like you were describing before, maybe ensuring some hygiene, completeness, whatever. Then you do something in the Power App that says, "Okay, go ahead and kick this down the line from here." Rob Collie (21:40): Yeah. Here's an example. In the CRM system where the sale is being executed, there's probably an address for this customer that is associated with that account, especially if we've done business with them before. But this customer might have many different physical locations. A pallet of stuff showing up at the wrong physical location would be a real problem. Justin Mannhardt (22:06): Yeah. Rob Collie (22:08): Even just a sanity check Power App that hits the sales rep back, shows up in their inbox or something, shows up in Teams, somehow there's a queue for them to process these things, where they need to just glance at the order and validate that the shipping address is the right one. Justin Mannhardt (22:28): Yeah. Rob Collie (22:28): Even if that's all it is, the only additional piece of information is: yes or no, that's the right address. Justin Mannhardt (22:34): Yeah. Or sometimes the material that is sold is related to a bill of materials to produce it. Maybe there's some choices that need to get made in the manufacturing process, such as what specific raw materials are we going to use for this order? Which machine are we going to produce it on this week? Maybe you're just adding the execution instructions. Rob Collie (22:59): This is interesting because you could stop yourself at this moment and go, "Wait a second.
Shouldn't those questions be encoded and implemented into the CRM?" The answer is of course, they could be. But your CRM might not be a nimble place to make those sorts of changes. Justin Mannhardt (23:20): That's right. Rob Collie (23:22): It's also a dangerous thing to be customizing. Justin Mannhardt (23:24): Yes. Rob Collie (23:25): There's a lot of validation and testing that's required. There's a reason why modifying and writing custom code into one's CRM doesn't happen all that frequently. Whereas this process you're describing is relatively safe, by comparison. It doesn't rock the boat. It's between. Forcing these sorts of modifications and customizations into the individual silo line-of-business applications, if that were so feasible, that would already be happening. Justin Mannhardt (23:55): I've worked for companies like this, I've engaged with companies in my consulting career like this, where they have done that. They said, "We've got the talent in-house, so we're going to customize this thing." Then you get into a conversation of, "We'd like to upgrade to the newer version." They realized, "Oh, we can't." Rob Collie (24:18): Yeah. "It'll break our customizations," yes. Justin Mannhardt (24:20): Or sometimes, the programming language that the customizations are done in is not the same programming language in the newer version. While it's possible, if you have the resources, the time, and the money, it becomes a heavier lift. It begs the question, why? Rob Collie (24:36): I was describing the heavy lift being that the original line-of-business system might be resistant to change, resistant to the customizations that you want to implement. You're describing it as also, even if you do perform those customizations, the next major software upgrade is going to be a problem. That rings true for me. I remember the object model in Office- Justin Mannhardt (24:59): Oh, yeah. Rob Collie (25:00): All the VBA solutions that were out there, being incredibly paralyzing in terms of the things we could do with the product, because if you broke people's macros, they wouldn't upgrade to the new version of Office. Justin Mannhardt (25:09): Yeah, been there. Yeah. Rob Collie (25:12): I promise you that, at Microsoft, we took that problem and approached it with a level of discipline that was probably 10 times greater than the average line-of-business software vendor. Because most line-of-business software vendors see themselves as platform vendors. They want to be considered like that, but they don't want to pay the price of it. So that's good. (25:30): But then, the other thing is, if you built it into the line-of-business system, then inherently you're saying, "Okay, whatever that extra logic is, then it's up to that line-of-business system to then push those records across the wire." The new information has to go from the CRM to the other system. That kind of customization, both ends of the process are going to be very non-cooperative with this. This is another reason why doing this in a lightweight, nimble, intermediate layer provides a shock absorber to the system. Justin Mannhardt (26:08): I like that analogy. Rob Collie (26:09): It's pretty easy for Power Automate, all it's doing is pushing a handful of data to something, and that other something is going to take care of all the validation, all of the retry. Validation with human beings, but also the logging in to the other system and all of that. Coding all of that into your CRM is almost a non-starter.
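The human-in-the-loop piece of that shock absorber, records pausing in an intermediate queue until someone approves or corrects them, can be caricatured in a few lines. This is a toy sketch with every name invented; in the episode's world this role is played by a Power App over Dataverse, not Python.

```python
from dataclasses import dataclass, field

@dataclass
class PendingOrder:
    crm_ref: str
    shipping_address: str
    approved: bool = False
    notes: list[str] = field(default_factory=list)

# The intermediate store: orders wait here instead of flowing straight
# from CRM to WMS, giving a human a chance to sanity-check them.
queue: list[PendingOrder] = [
    PendingOrder("006XX0000012345", "123 Warehouse Rd"),
]

def approve(order: PendingOrder, corrected_address: str | None = None) -> None:
    # The "yes or no, that's the right address" step from the example.
    if corrected_address:
        order.notes.append(f"address corrected from {order.shipping_address}")
        order.shipping_address = corrected_address
    order.approved = True

def ready_for_downstream() -> list[PendingOrder]:
    # Only approved orders get pushed on to the next system.
    return [o for o in queue if o.approved]

approve(queue[0], corrected_address="456 Loading Dock Ln")
print(ready_for_downstream())
```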
This is why the between workflows have remained so non-digitized. Justin Mannhardt (26:42): Yeah. There's also a lot of tedium in play here, too. You have a written process, you look at your SOP documents and you say, "Oh, when this happens, Jan sends an email to Rob." Okay, well we could probably just get the Power Automate to send the email to Rob, if that's what needs to happen. (26:59): An example of this is something I built for myself at P3. When a potential new customer reaches out to us, and they want to meet with us and just chat, I wanted a process that reminded me to go check out who that company is and understand who I'm going to talk to. I just had a trigger that said, "When a meeting gets scheduled from this arena, just create a task for me to remember to do this before the meeting." Even little things like that, that are just personally useful, have been really beneficial as well. (27:33): It's much easier to say well yeah, dashboards, charts, graphs, cool. Or even Fabric, even though that needs some demystifying still. This middleware, it's invisible, there's so many options. There's 100,000 little improvements you could make with it. Rob Collie (27:48): The world has spent a long time coming around to why dashboards could be valuable. Justin Mannhardt (27:55): They still are. Rob Collie (27:56): Yes. When you say the word dashboards and you show that work product, even in the abstract to someone, the communication of what the value is benefits from all of that history of the world waking up to the value of dashboards. Honestly, it wasn't that clear 15 years ago. It wasn't clear to people, most people anyway, why they needed them, why they were better than just running the reports out of each line-of-business system. But because it's such an inherently visible work product, it is a lot easier, I'm going to use the word, it's a lot easier to visualize what the impact will be, what it does for you. Whereas these other workflows, until you know that they're improvable, this is why digital transformation is so hard to understand because it is really talking about spaces where it's hard to visualize software helping because it's never been able to help. (28:53): Let's go back to this example where the sale happens in the CRM system. Some information just automatically gets dropped in a data store, off to the side for the moment. There's potentially some Power App clarification. There are human inputs that are required here and you still want a human being to provide those. Justin Mannhardt (29:16): I want to point out here too, it's easy to get into a situation where that data store is simply being read by a report, even a Power BI report. But if the human's going to say, "Yes, no," or add to it, the Power App is just a way better piece to put there. Rob Collie (29:32): Yeah. Let's have this example be like an example that we would look at and smile, be proud of. The Power App is involved. Then when the human interaction is done, they press okay or approve in the Power App. Take me to the next step. Justin Mannhardt (29:49): Well ideally, we are pushing data and information into the next system or workflow. Rob Collie (29:57): This is a two silo problem. We have the CRM system and then we have the manufacturing, work order and shipment system, the fulfillment system. Justin Mannhardt (30:06): The WMS. Rob Collie (30:08): Is that what that is? Justin Mannhardt (30:08): Yeah. Rob Collie (30:09): Okay. We've already covered the first silo.
We've gotten the human interaction. Now it's time to send it on to the second silo. How does that work? Justin Mannhardt (30:20): This just comes down to what the point of integration is in the second silo. We could be inserting records into a SQL database, we could be making a post request to an API endpoint. In Power Automate, most of these things are WYSIWYG in nature. There is an open code interface if you need to get to that and want to do that. But usually, it's just mapping. You find your destination and it says, "Oh, here's the fields to map to." You say, "Okay," you just drag and drop. It just depends on what your destination system is, but you're just creating a target in your workflow, and the data goes. Rob Collie (30:55): The way I like to look at this is that, even though each line-of-business silo system, they're never really built to talk to each other. Justin Mannhardt (31:04): Right, they need a translator. Rob Collie (31:05): Yeah. The translator and the shock absorber. But at the same time, it's not hard to get the information you want out of one system, and it's not hard to write the information you need into another. But when you try to wire them directly through to each other- Justin Mannhardt (31:23): Yeah. Rob Collie (31:23): That is actually really difficult. You need this referee in the middle, that's able to change gears, like the ambassador between the two systems. When you think about a translator system, an ambassador system, a shock absorber, whatever you want to call it, whatever metaphor you want, you can also imagine an incredibly expensive, elaborate piece of custom software that's being written to do that. That's not what we're talking about. Justin Mannhardt (31:47): No. Rob Collie (31:48): Let's recap. Trigger fires in CRM system, some data gets slurped out related to that sale, dropped in an intermediate location that then powers a Power App. Power App is able to read that information, it knows who to reach back to to get the clarification, the approval, et cetera. It might be multiple people that need to provide some input. Justin Mannhardt (32:09): It could be a whole workflow that lives right there. Rob Collie (32:12): But eventually at the end of that workflow, in this case we'll just assume it's one step, one human being, the sales rep just needs to sign off, then the Power App's job is done. That's the human interaction part. Now we're back to Power Automate, correct? Justin Mannhardt (32:24): That's right. Rob Collie (32:25): Power Automate will notice there's another trigger that the Power App is done with its part, the approval button was pressed. Justin Mannhardt (32:31): Clicked, yeah. Rob Collie (32:33): Then it turns around, and it knows, because again we wire it up ... It sounds like we might be lucky, it's just drag and drop, one time development. But if it's not, it's probably not that much code, to go inject the new work order into the WMS system? Justin Mannhardt (32:52): Yeah, it's the WMS, warehouse management system. Rob Collie (32:53): Let's call that the end of the story for this one integration. Let's say things go incredibly well in this project. We don't really encounter any hiccups. Best case scenario, how long on the calendar would it take for us to wire something like this up? Justin Mannhardt (33:12): Yeah, best case scenario this is something that gets done inside of a week. Rob Collie (33:15): That's the difference. Justin Mannhardt (33:16): Yeah. Rob Collie (33:18): All right.
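The two integration points Justin names, inserting records into a SQL database or making a POST request to an API endpoint, are each one small step in code. Here is a sketch of the POST variant; the WMS URL and the payload fields are fictional, and the error handling only exists so the sketch runs cleanly without a real endpoint.

```python
import json
import urllib.error
import urllib.request

# Hypothetical WMS endpoint, the "point of integration in the second silo".
WMS_URL = "https://wms.example.com/api/work-orders"

def push_work_order(order: dict) -> None:
    # Serialize the approved order and POST it to the downstream system.
    body = json.dumps(order).encode("utf-8")
    request = urllib.request.Request(
        WMS_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(request) as response:
            print("WMS responded with", response.status)
    except urllib.error.URLError as err:
        # Expected here, since the endpoint is made up for illustration.
        print("could not reach WMS (fictional endpoint):", err.reason)

push_work_order({
    "crm_ref": "006XX0000012345",
    "customer": "We Manufacture Things Ink, Inc.",
    "ship_to": "456 Loading Dock Ln",
})
```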
Worst case scenario, both of these systems are more stubborn than usual, the connectors aren't built into the system, and they still have some relatively rudimentary ways of data access, but it's nothing WYSIWYG off-the-shelf. We just get unlucky with these two stubborn line-of-business systems. How bad can that be? Justin Mannhardt (33:37): Well, instead of being inside of a week, maybe it's weeks, like two or three. The only reason that gets extended would be okay, instead of pure WYSIWYG drag and drop, maybe we are having to do some light handling of a JSON array. But there's tools for that. You can say, "Parse this into fields so I can now drag and drop it." Maybe instead of our Power Automate workflow having three, four steps, maybe there's 10. Some of those steps have a little bit more involvement. Maybe there's some time because we got to troubleshoot a little bit more and make sure we've got it all right. But I think the overall point here is these are relatively light touch on the calendar. Rob Collie (34:18): I had a job in college that I've never brought up on this show. Justin Mannhardt (34:23): Ooh. Rob Collie (34:23): I was obsessed with this workflow for nearly a whole decade afterwards. Where I was working for a construction company, and there's this thing in the construction industry that I'm sure is still a thing, and it's called the submittals process. Where it turns out, when you're going to build a building, there's an ingredients list for a building. You were talking about different material options for manufacturing. So we're going to make a brick exterior. Okay, what kind of brick? There are many different colors, kinds, textures, levels of quality. Literally, the owner of the building, the person paying to have the building built, that owner and their architect, and sometimes their structural engineers, are going to want to hold a physical brick in their hand. Justin Mannhardt (35:05): Right. Rob Collie (35:06): This is the brick that you are going to use. They want to inspect it with their eyes, whatever, they want to feel ... Maybe even run tests on it. Justin Mannhardt (35:14): Smack it with a hammer. Rob Collie (35:16): Right. Then, when you build the building, you better use that brick because they're holding onto the brick, the sample, the reference brick. You think about the number of ingredients that goes into building a building, and the building in question that I was working on helping out with this process was the new chemistry building at Vanderbilt University. It was not just a regular building, it had all kinds of specialized hardware, and exhaust, and crazy stuff that wouldn't be in a normal building. (35:44): There's this long list of materials that need to have submittals produced for them, samples. The requests all go to a million different vendors. You have to ask the subcontractor, the plumbing contractor, what pipe they plan to use. You find out what pipe they plan to use and then you say, "Okay, where do I get a sample of that pipe?" Sometimes you have to send the request for the sample to the pipe manufacturer, or sometimes the subcontractor, the plumbing people, will do it for you. Ah! It's awful. (36:14): I was brought in to just be the human shock absorber in this process. I was constantly taking information from one format, copying and pasting it, if I was lucky. Usually, re-hand-entering it into another one. I have to do this multiple times. I have to do this on the outgoing request, and then the incoming materials coming back.
Ugh, and then the shipping labels and everything. It was just that they brought me in because they had their assistant project manager for the construction company, the general contractor, on this site. All of this was having to go through him. It turns out, he had another job which was called build the building. Justin Mannhardt (36:54): Just a minor, little job. Rob Collie (36:56): Yeah. The job of pushing the samples around was a fine thing to subcontract to a college student. I swear, I did 40 hours a week on that for a whole summer, and then part-time for the next two years. That's all I did. Justin Mannhardt (37:13): Make note, students. If you take an internship and you end up like Rob, learn how to do Power Automate stuff and use that for your internship. Rob Collie (37:22): By the way, we already had Lotus Notes with a tremendous amount of customized Lotus Notes templates for this process. Justin Mannhardt (37:30): Yeah. Rob Collie (37:30): But all that really was, was just another line-of-business system that didn't talk to anything. It spit out paper is what it did, it spit out printed slips that announced, "This is your brick." Justin Mannhardt (37:42): Congratulations. Rob Collie (37:44): That would be a really, really challenging digital transformation process today, because not only is it cross-system, it's also cross-company. But I'm sure that, if we looked at that process today, we would find things that could be optimized. Justin Mannhardt (37:56): Oh, yeah. Your example reminded me of a really important opportunity in the construction industry or lots of trades. You're talking about people that are out in the field, on job sites, on location, they're not sitting in offices at workstations. All of these things we're talking about, especially these Power App interfaces, can be optimized for mobile. Instead of, "Oh, I'm going to write this down so when I get back to my home office," I can put something on the smartphone. Even if you're not picking from a list of material SKUs or whatever, you can say, "Hey, Rob needs a brick." (38:36): Now this goes back to your central office, and it's into a work queue, and another screen in the Power App, then they can go navigate the vendors and all that sort of stuff, too. That's a great example of where you can just put a little spice on it. Rob Collie (38:50): I said that was the only thing I did in that job, that's not true. I had other jobs. One of them was the plumbing contractor was deemed to be running well behind schedule, they were not installing pipe fast enough, pipe and duct work. They assigned me, the construction company assigned me the job of going out there, walking through the building and seeing how much had been installed, linear feet of various materials, and writing it down. I was terrible at this. It's not a good fit for me at any age, but at age 20, I was just constantly under-reporting how much work they'd actually done and getting them in trouble. Justin Mannhardt (39:32): This does not sound like a good use of Rob. Rob Collie (39:34): Eventually, everyone bought me the little thing that wheels along on the ground and counts distance. What I would do is I'd be looking overhead at these copper pipes that were hanging from the ceiling, and I'd just stand beneath one end of them and walk across the building, tick, tick, tick, tick, tick, tick. But then, what would I do? I would write it down. I'd write down a number. What floor am I on? What side of the building am I on? Which pipes am I looking at?
"Oh yeah, 150 linear feet." By the way, have I already counted those pipes? Did I count those pipes last week? I don't know. Justin Mannhardt (40:11): There's errors in the world that have Rob Collie's fingerprints on them. There's a building somewhere that's had some pretty serious issues over the years and it's Rob's fault. Rob Collie (40:21): The plumbing contractor had a pretty good sense of humor about it. They knew I was a youngster. Anyway, really just another example of something that could be digitally transformed today and it doesn't have to be difficult. (40:33): This is not something that's a global, let's go digitally transform the whole company all at once. You can pick and choose some high value examples. And decide if that's a sufficient win for you, you might be encouraged to do it elsewhere. There's no thou shalt do all of these things, there's nothing like that. You get to choose where your cost benefit curve lies. But just even knowing that this is possible I think and what it entails. Demystifying ... The process we just walked through, with today's technology, is not difficult. We're talking, as you said, within a week to several weeks on the worst case end. You do realize a bunch of benefits from that. Justin Mannhardt (41:16): Yeah. I love how well the Power Platform, and this idea of it being middleware, just leans right into an idea that's been around for a long time in companies, which is continuous improvement. You can look at a problem, like the ones we've been describing, and you can go down the path and you say, "Okay, is there a piece of software that would solve or improve this problem?" You could look into something like that. Or you could say, "Actually, we have these other tools that we've been learning how to use and integrate into our organization, and we'll just take a week, or three weeks and make it better." If you decide to replace a silo down the road, like, "Hey, we're going to do a CRM take out," you've not saddled yourself up with this huge level of tech debt. Rob Collie (42:05): Yeah, that's huge. Justin Mannhardt (42:06): Because a lot of these decisions have so much pressure because you're like, "If we don't get this right, then we'll have all this." It's actually okay to be like, "Yeah, we're going to throw this away and build a different one." I think that's an important aspect of these things. You can empower a team of people who are just interested in making things better and it's not this huge sunk cost or investment that you're never going to get back. You're going to get value from it, even if you're only going to leverage it, say for a year. It's like, "Hey, that week was worth it because it eliminated this many errors," or lost time, or whatever. Then we did something else. Rob Collie (42:44): This really hearkens back to something that I struggled to explain to people in my time at Microsoft. I had an intuition, and a lot of people had the same intuition, we weren't doing a great job of explaining it. What I'm going to talk about is the XML revolution. (43:01): XML, and JSON, and all these sorts of things, are just taken for granted today. There's nothing magic about them, it's completely commoditized and that's the way it should be. But those of us who saw this XML thing coming as a real game changer, I think we're really just keying in on exactly this thing we're talking about. The world had been obsessed with APIs up until that point. Every system had an API on it that was capable of doing verby things. Read/write, make changes. 
These APIs tended to be very heavy. Anyone that's ever written any macro code against Excel will know that the Excel API is incredibly complicated. I'm talking about the desktop VBA COM automation. Go play around with the Range object for a couple of days. (43:49): The idea that two systems with good APIs could then talk to each other was still this myth that I think most of the software world believed. Our belief was stubbornly that we just hadn't gotten the APIs right yet. The next standard in API was going to get it done. What XML did, all it was really doing was saying, "Look, there's going to be a data transmission format that is completely separate from any API, and it's super, super readable, and it's super, super simple." It's the beginning of this shock absorber mentality. Since then, we've discovered that it doesn't have to be XML. Justin Mannhardt (44:30): Oh, yeah. Rob Collie (44:31): But the XML thing did eventually lead us down the road of Hadoop, and data lakes, and all of that. But yeah, this notion that you get the necessary data from system one, and there's this temporary ah, breath that you can take, and you can disconnect the process of slurping from system one from the process of injecting into the other system. You can ever so slightly disconnect those two so they're not talking directly to each other. When you do that, you gain just massive, massive, massive benefits. (45:03): Yeah, it's kind of neat to connect that now. Again, I used to talk to people all the time like, "No, XML is magic. It's going to blah, blah, blah." People would go, like my old boss did, "I don't get it. Why is it magic?" I'd be like, "Well, it just is, man. You don't understand." He beat that out of me. It was one of the greatest gifts that anyone's ever given me. By the time I was done with him, I could explain why XML was valuable but not at the beginning. I certainly didn't envision where we've landed here. (45:27): Okay, so I think this was pretty straightforward, right? If you want to identify what digital transformation means for your organization ... This actually really parallels the talk I gave on AI the other night here in Indy. Justin Mannhardt (45:39): Oh, right. Yeah. Rob Collie (45:40): Don't talk about it from the tech point of view. Justin Mannhardt (45:43): Yeah. Rob Collie (45:43): Think about it from the workflow point of view. Where are the workflows in your company? What's really beautiful about digital transformation is that we can provide this extra guidance: what are the workflows that happen between systems or adjacent to systems? Justin Mannhardt (46:00): Yeah. Rob Collie (46:00): It helps you focus on what we're talking about. It's not often you get a cheat code like that, so you can really zero in on something. (46:08): I suspect that once you have that algorithm for looking, you're going to find lots of things. The Power Platform makes it- Justin Mannhardt (46:18): Ah, it transforms them in digital ways. Rob Collie (46:20): It puts that completely within range, completely within budget in a way that you wouldn't necessarily even expect. It's just kind of magic. It's the same level of magic that you'd get from Power BI, but in a read/write workflow sense. Justin Mannhardt (46:33): Between and adjacent to, that's magic. That's a magic algorithm because I bet a lot of people, when you say digital transformation, they are thinking on or within the system, not between them. Rob Collie (46:45): Yeah.
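Rob's XML point, a plain, self-describing data format that both sides can read with no shared API, is easy to make concrete. Here is the same made-up order record rendered both ways, using only the Python standard library; either string can sit in a file or a queue until the receiving system is ready to parse it, which is exactly the "breath" between the two systems.

```python
import json
import xml.etree.ElementTree as ET

# A made-up record; the field names are purely illustrative.
order = {"crm_ref": "006XX0000012345", "quantity": 1, "item": "pallet of ink"}

# JSON: the consumer needs no knowledge of the producer's API,
# only of this self-describing text format.
as_json = json.dumps(order)

# The same idea, a generation earlier, as XML.
root = ET.Element("order")
for key, value in order.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```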
It's another one of these marketing terms that's almost deliberately meant to be mystical. Everyone's supposed to pretend that they know what it means, but then it's left for all of us out here in the real world, close to where the rubber meets the road, to actually do something real with it. (46:59): I wonder what percentage of the time people use the phrase digital transformation, if you scratch the surface, you'd find that they were completely bluffing? Justin Mannhardt (47:07): Yeah. There's a category of thinking in digital transformation, or even data analytics, where there's just all these abstract, conceptual statements or diagrams that mean very little. Let's just zoom into an actual problem, even if it's a little one, and fix it. Then, we'll go to the next one and fix that. We don't need big, fancy frameworks, teams, and steering committees to do any of that. Rob Collie (47:35): I've got another example. Justin Mannhardt (47:36): Oh, yeah? Rob Collie (47:37): It's one that we've implemented here at P3. We have these Power BI dashboards that measure the effectiveness of our advertising. It turns out that advertising in particular on Google AdWords is not a global thing. Your overall performance is the sum of many micro trends. It's highly, highly, highly variable based on which keywords you're matching against, what kinds of searches you're matching against, and what kind of messaging you're presenting to the user of Google. The only way to improve, most of the time, is to improve in the details. (48:11): All right. For a while, we had this workflow where we'd identify an intersection of ads that we were running and what we were matching up with, in terms of people's searches. We'd identify a cluster of those that, I'll just keep it simple for the moment, where we'd say, "Look, right now we're providing the same message to a bunch of searches that aren't really the same search and we need to break this out, and provide a more custom, tailored message to each of these individual searches." We'd mark something for granularization. (48:43): But originally, what we would do is we were looking at this report, we'd write down essentially this intersection and say, "Go split that out." Justin Mannhardt (48:51): What did we do? Rob Collie (48:52): Immediately, we'd lose all track of what did we even decide to do? Because then someone had to go over to the totally separate Google AdWords system and enter new ads, and break this thing out. Even knowing whether that had happened, producing the work list of things that needed to happen, was very difficult because we were in the context of a Power BI dashboard that didn't do any communication elsewhere. We couldn't track what our to-do list was. Except again, completely offline. We built a Power App and embedded it into some of these reports. You'd click on the thing you'd want to break out, the Power App would pick up that context, and then we'd just use a little drop-down and say, "What do we want to do to this?" We're going to mark this for granularization. (49:39): That did produce us a to-do list, that then could also be re-imported back into the report, so that we could see that we had marked that one to explode it out. We didn't have to look at it again, and we also, in the reporting, could see whether that splitting up had been done because you'd come back to the Power App and say, "Done." Even better, you'd enter the IDs of the new groups, so that you can say, "Hey, this one is now superseded by these."
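A toy version of the to-do list Rob describes: mark an ad/search intersection for granularization, then record which new groups supersede it so the report can track lineage and stop flagging it. All identifiers here are invented; in the real workflow this state lived behind a Power App embedded in the report.

```python
from dataclasses import dataclass, field

@dataclass
class GranularizationTask:
    ad_group_id: str                 # the intersection flagged in the report
    status: str = "todo"             # "todo" -> "done"
    superseded_by: list[str] = field(default_factory=list)

tasks: dict[str, GranularizationTask] = {}

def mark_for_granularization(ad_group_id: str) -> None:
    # What clicking the embedded Power App records.
    tasks[ad_group_id] = GranularizationTask(ad_group_id)

def mark_done(ad_group_id: str, new_group_ids: list[str]) -> None:
    # Record lineage: the old group is superseded by the new, more
    # tailored groups, so the reporting can show the split happened.
    task = tasks[ad_group_id]
    task.status = "done"
    task.superseded_by = new_group_ids

mark_for_granularization("adgroup-legacy-42")
mark_done("adgroup-legacy-42", ["adgroup-split-a", "adgroup-split-b"])
print(tasks)
```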
(50:07): Now we never got to the point of directly writing back to Google AdWords to make the changes. That still happened offline. We certainly could have imagined a world in which a Power App, a much more elaborate process, was built that, separately from the dashboard, would prompt you to write the new ad copy and things like that. You get to choose where the 80/20 is in your process. For us, the 80/20 was recording the list and tracking the lineage while we're in the context of the report. That was a big deal. Justin Mannhardt (50:39): There are over 1000 pre-built and certified connectors available for the Power Platform. Rob Collie (50:46): That's it? Just kidding. Justin Mannhardt (50:48): They're adding things all the time. We live in a SaaS world. All these things, they're real. Rob Collie (50:53): Yeah. That's a really critical point about Microsoft, is that they have realized that they are the middleware company. Justin Mannhardt (50:59): Satya is all about it. Rob Collie (51:00): Yes. In the Bill and Steve era, this was not Microsoft's game. Justin Mannhardt (51:06): Yeah. Rob Collie (51:07): In the Satya era, it's more like, "No, we want to work with everything." Justin Mannhardt (51:11): It's great, I love it. Rob Collie (51:12): Just recently, as I've gone down this path myself, reverse engineering in my own little way what this term means and coming to the conclusions that we have, I've realized that we are a digital transformation company. It's not the only thing that we do. Is read-only Power BI middleware, is that digital transformation? Well, probably. By the strictest definition, probably yes, but not by the spirit of the law. The spirit of the definition means a read/write workflow. I'd mentioned in this last example, Power BI can be part of a read/write workflow. There's no reason to sideline it. In other episodes, we talked about how improvement and action are the goal, and how a Power App can be added to a Power BI report to help you take action on what the report is telling you. But just the broader Power Platform, Power Apps and Power Automate in particular. We do have a handful of clients where most of the work we're doing is digital transformation work. Justin Mannhardt (52:08): Right, this type of work. Rob Collie (52:09): The adjacent and in-between that we're talking about. Even though we're mostly thought of as a Power BI company, as we're doing our next round of website rebuild, we've 100% put a digital transformation page on our sitemap. It'll probably use some of this language we're talking about here. Digital transformation, what does it mean? It is both not that special of a term, there's no rocket science to it, and at the same time, there's a lot of value to be realized from it. Justin Mannhardt (52:36): Totally. Here's a fun little call back to our origin story as individuals and as a company. We spend a lot of our time helping, for example, like the Excel analyst move over to Power BI and we're trying to solve these middleware gaps. That's why I think, for us, it's just been quite natural to provide these types of services and capabilities to customers as we've grown because it's the same type of person that's spirited to solve these types of issues, and the technology, and the openness of it brought everything in range. It's fun to reflect back on how broad we can show up to a customer beyond just dashboards. Rob Collie (53:22): Yeah. It's a miracle and a testament to what Microsoft has pulled off.
You can certainly imagine a world in which they could enable that up-tempo, highly efficient, what we call faucets-first methodology for dashboards. Justin Mannhardt (53:22): Yeah. Rob Collie (53:38): And stopped there. Instead, they extended it to something like workflow and applications, and made implementing these solutions feel very, very similar. Justin Mannhardt (53:50): Yeah. Rob Collie (53:50): It's completely compatible with our ethos. It's almost like I didn't even notice when we made that transition into doing both. It sneaked up on me. That's a good sign. I feel a little silly that it took me a while to digest it, but I love that it happened organically without us having to go- Justin Mannhardt (54:10): Right. Rob Collie (54:11): Pick up another toolset from another vendor, or change our hiring profile dramatically, or anything like that. Justin Mannhardt (54:18): Yeah. Now, we've got some of these cool projects where you've got maybe someone whose expertise is more on the Power BI side working right alongside someone whose expertise is more on the Power Apps, Power Automate side. They're just moving in lockstep with the same customer, closing these middleware gaps, building the reporting and the action that lives around it. It's that whole thing working together that makes it all really cool. Rob Collie (54:41): I'm also developing an intuition that AI, maybe not the only application of AI, but I think a lot of the surface area of where we will find AI to be useful, plugs into this digital transformation thing, the adjacent in between. In particular, in sub-workflows within the overall workflow. Justin Mannhardt (55:03): Yes. Rob Collie (55:03): Did your reaction fit that? Justin Mannhardt (55:06): Yes, totally. Totally, totally, totally. Yeah. Rob Collie (55:09): Then, we're good. I think it's easy, with dashboards, with BI, to imagine the global. Going from a non-dashboard company to a dashboard company, it's very easy to imagine that as a global thing, and it's probably the right thing. Any place where you're flying without the information you need in a convenient, easy-to-digest format, let's go and get that. Even there, with the transformation to a data-oriented organization, a data-driven culture, you still pick places to start. Justin Mannhardt (55:39): You got to start somewhere. Rob Collie (55:40): This other thing, digital transformation, is a little harder to imagine as a global thing, and that's fine. I think AI's the same way. You should not be thinking about AI as a global transformation for your business. Just like digital transformation, it's a matter of going and finding particular places where you can score these wins. Speaker 4 (56:00): Thanks for listening to the Raw Data by P3 Adaptive Podcast. Let the experts at P3 Adaptive help your business. Just go to p3adaptive.com. Have a data day.
Nduati Kuria shares his journey from studying AI to discovering why Matthew Griffith's elm-ui makes the web approachable. He explains how an innocuous issue on Tereza Sokol's elm-charts led to a new job. Thanks to our sponsor, Logistically. Email: elmtown@logisticallyinc.com. Music by Jesse Moore. Recording date: 2023.11.10

Guest: Nduati Kuria

Show notes:
[00:00:20] Sponsored by Logistically
[00:00:52] Introducing Nduati: Qoda; Elm Town 36 – The Risk of Elm; Elm Town 55 – From algorithms & animation to building a decentralized finance app; Art; Culture; Haruki Murakami Website UI; WebGL Sculpture Animation site about Marcus Aurelius
[00:01:42] Getting started: "How to teach programming (and other things)?" by Felienne Hermans at Strange Loop 2019
[00:05:58] Nduati's College Journey: Swift, Internships, and Elm Discovery
[00:08:27] Learning Elm: It actually fits in my brain (elm-ui)
[00:13:03] Uber for school buses
[00:16:59] How Elm drives you toward best practices (Elm Town 67 – Breaking things down with Gingko Writer)
[00:23:28] Introducing Elm at work
[00:25:36] Master's & self-directed learning
[00:28:09] From elm-charts to Qoda (Tereza Sokol's elm-charts)
[00:34:53] The rigour of programming with Elm at Qoda
[00:39:55] Ports: "The Importance of Ports" by Murphy Randle at Strange Loop 2017; Elm Radio; A demo of Qoda and an explanation of how we use ports by Dwayne Crooks
[00:47:14] Haruki Murakami site animation (Haruki Murakami Website UI)
[00:50:07] Not having to pay the cost of constant change (Tereza Sokol's elm-charts)
[00:54:33] Picks. Nduati's picks: "Parse, don't validate" by Alexis King; "Drag & Drop without Draggables & Dropzones" by Jasper Woudenberg; Matthew Griffith's elm-ui. Jared's picks: Elm Radio on opaque types (Intro to Opaque Types); Deliberate Practice (...and in most other episodes)
In the first hour of "Connections with Evan Dawson" on Tuesday, March 12, 2024, Joseph Burgess, team leader at the Social Sciences Data Lab at the University of Copenhagen, discusses what we can learn from presidential polling.
The suspense was killing us! OK, the old parser was then... but what about NOW? We're finally answering this question... in more detail than you dared to ask for. PEG, memoization, funky secrets, and how a certain auto-formatter inflicted an existential crisis on itself. It's all there, told in barely 100 minutes! Can you believe it?

# Timestamps
(00:00:00) INTRO
(00:00:54) PART 1: What even is PEG?
(00:04:02) You can't prove anything!
(00:05:03) What's a "parsing expression"?
(00:08:23) Our old LL1 parser wasn't doing its job
(00:09:37) "Soft keywords" in LL1: A Horror Story
(00:13:16) PART 2: How PEG was adopted by Python
(00:17:10) Why not LALR?
(00:22:11) The PEG paper wasn't enough either, if we're honest
(00:26:26) Less obvious advantages of the new parser
(00:31:28) Black is stuck with LL1, can it cope?
(00:36:24) Hedging against Łukasz, the bringer of doom
(00:41:14) PART 3: How does the PEG parser of CPython work?
(00:44:30) Pedantic Pablo on "exponential"
(00:45:14) Fresh news from literally yesterday last week
(00:46:39) Pedantic Pablo on "infinite"
(00:47:32) Memoization in the PEG parser
(00:50:41) Parse once, and if it fails, try again!
(00:52:14) How to model a grammar of programming mistakes?
(00:56:36) Why is there C code in my grammar file?
(00:59:57) Bro, do you even lift?
(01:01:45) How soft keywords work today: it's not free lunch
(01:04:29) Funky grammar secrets
(01:09:07) PART 4: PR OF THE WEEK
(01:09:15) audioop.c license shenanigans
(01:14:56) The secret profiler inside CPython (tests)
(01:22:45) PART 5: WHAT'S GOING ON IN CPYTHON?
(01:23:30) Free-threading changes
(01:28:15) Faster Python changes
(01:35:39) End of an era: docs get rid of Python 2 migration info
(01:36:45) Python --help output is now nicer
(01:38:43) SQLite as a dbm backend
(01:41:08) OUTRO
How can you gain deeper insights into your complex systems beyond just monitoring infrastructure health metrics? Join us as Charity Majors, CTO and Co-Founder of Honeycomb, challenges traditional approaches to observability. With experience from the infrastructure trenches of fast-growing startups, Charity pushes us to rethink our methods.Can high-cardinality data exploration reveal the "unknown unknowns" hiding in your telemetry? Is prioritizing user experiences over infrastructure stats the key to untangling your "hairball" systems? And what role should observability play across the full software development lifecycle? Charity offers a forward-looking perspective on evolving observability practices to match increasing complexity. Observe the future of observability - Tune in to our latest episode now!Charity Majors is a Co-Founder and Engineer at Honeycomb.io, a startup that blends the speed of time series with the raw power of rich events to give you interactive, iterative debugging of complex systems. She has worked at companies like Facebook, Parse, and Linden Lab, as a systems engineer and engineering manager, but always seems to end up responsible for the databases too. She loves free speech, free software and a nice peaty single malt.Sponsored by: https://www.env0.com/
Helloooooo, internet land! This week, Emily and V were treated to a primer on the OMG Check Please! fandom by listener and friend-of-the-pod korechthonia, and Emily explains the divide between the pro-Parse and anti-Parse sides of the otherwise sweet and peaceful fandom. Plus, she tells V about growing up on a boys' hockey team herself and losing a tooth on the ice! Also, inextricable from OMGCP's sweet tale of gay hockey players in love is the sad, homophobic truth of the NHL, so we had to dig into that as well. But mostly: cute hockey comic about love and pies. WERE YOU PART OF THE KENT PARSON DISCOURSE OF 2016 OR WERE YOU NORMAL? If you would like to suggest an event in fandom history and/or give us a fandom primer, get in touch on our Tumblr! You can support this podcast and also get in touch with us on Patreon.
Bob Evans notes that for many Americans, the fault lies not in our stars...
Merriam-Webster's Word of the Day for November 3, 2023 is: parse PARSS verb To parse something is to study it by looking closely at its parts. In grammar and linguistics, parse means "to divide (a sentence) into grammatical parts and identify the parts and their relations to each other." // The lawyer meticulously parsed the wording of the final contract to be sure that her client would get all that he was asking for. See the entry > Examples: "Around the turn of the millennium, the captcha tool arrived to sort humans from bots based on their ability to interpret images of distorted text. Once some bots could handle that, captcha added other detection methods that included parsing images of motorbikes and trains, as well as sensing mouse movement and other user behavior." — Christopher Beam, WIRED, 14 Sept. 2023 Did you know? If parse brings up memories of learning the parts of speech in school, you've done your homework regarding this word. Parsing sentences, after all, is part and parcel of learning to read and write. Parse comes from the first element of the Latin term for "part of speech," pars orationis. It's an old word that has been used since at least the mid 1500s, but it was not until the late 1700s that parse graduated to its extended, non-grammar-related sense of "to examine in a minute way" or "to analyze critically." Remember this extended sense, and you'll really be at the head of the class.
In this episode, I spoke with Charity Majors, who is the co-founder and CTO of Honeycomb, a popular observability platform.

We had a wide-ranging conversation about Parse (and its acquisition by Facebook), observability, DevOps, and why you should deploy on Fridays (but you still need to apply good engineering sense!).

Links from the episode:
Honeycomb
Honeycomb blog
Honeycomb's AI-powered query assistant
OpenTelemetry

You can find Charity on X as @mipsytipsy.

-----

For more stories about real-world use of serverless technologies, please subscribe to the channel and follow me on X as @theburningmonk.

And if you're hungry for more insights, best practices, and invaluable tips on building serverless apps, make sure to subscribe to our free newsletter and elevate your serverless game! https://theburningmonk.com/subscribe

Opening theme song: Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0
This week on AZAAZ we answered: ❓The Questions❓
Join Mike Davis and Producer Amanda for "Hot Take Tuesday" as we parse the headlines and share our unvarnished opinions "This Evening."
We're off to Gen Con! Here's a space story! Show Notes: Run Time: 1:39:07 We're happy to be here, and we're also happy to be going to Gen Con! If you're in the Indianapolis area on this coming Sunday, August 6th, at noon, come to Stadium Room 12 and join us for a live show! However, before that, we go to space with our most oft-character-changing hero, Parse! Oh, and because Adam had to make that poster for the comic book cover anyway, here's a "clean" version of it: No episode next week, because of Gen Con! See you after the dust settles!
It's updates on the work front today! Stephanie was tasked with removing a six-year-old feature flag from a codebase. Joël's been doing a lot of small database migrations. A listener question sparked today's main discussion on gerunds' interesting relationship to data modeling. Episode 386: Value Objects Revisited: The Tally Edition (https://www.bikeshed.fm/386) RailsConf 2017: In Relentless Pursuit of REST by Derek Prior (https://www.youtube.com/watch?v=HctYHe-YjnE) REST Turns Humans Into Database Clients (https://chrislwhite.com/rest-contortion/) Parse, don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) Wikipedia Getting to Philosophy (https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, this week, I've been tasked with something that I've been finding very fun, which is removing a six-year-old feature flag from the codebase that is still very much in use in the sense that it is actually a mechanism for providing customers access to a feature that had been originally launched as a beta. And that was why the feature flag was introduced. But in the years since, you know, the business has shifted to a model where you have to pay for those features. And some customers are still hanging on to this beta feature flag that lets them get the features for free. So one of the ways that we're trying to convert those people to be paying for the feature is to, you know, gradually remove the feature flag and maybe, you know, give them a heads up that this is happening. I'm also getting to improve the codebase with this change as well because it has really been propagating [laughs] in there. There wasn't necessarily a single, I guess, entry point for determining whether customers should get access to this feature through the flag or not. So it ended up being repeated in a bunch of different places because the feature set has grown. And so, now we have to do this check for the flag in several places, like, different pages of the application. And it's been really interesting to see just how this kind of stuff can grow and mutate over several years. JOËL: So, if I understand correctly, there's kind of two overlapping conditions now around this feature. So you have access to it if you've either paid for the feature or if you were a beta tester. STEPHANIE: Yeah, exactly. And the interesting thought that I had about this was it actually sounds a lot like the strangler fig pattern, which we've talked about before, where we've now introduced the new source of data that we want to be using moving forward. But we still have this, you know, old limb or branch hanging on that hasn't quite been removed or pruned off [chuckles] yet. So that's what I'm doing now. And it's nice in the sense that I can trust that we are already sending the correct data that we want to be consuming, and it's just the cleanup part. So, in some ways, we had been in that half-step for several years, and they're now getting to the point where we can finally remove it. 
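In code, the cleanup Stephanie describes gets easier once the scattered checks are funneled through a single entry point. A minimal Ruby sketch, with every name invented for illustration; it captures the overlapping condition Joël summarizes (paid for the feature OR grandfathered in via the beta flag), so that retiring the flag later is a one-line change instead of a hunt through every page that renders the feature:

```ruby
# Hypothetical consolidation of the repeated flag checks: every view and
# controller asks this one question instead of re-checking the beta flag.
class Account
  def initialize(paid_features: [], beta_flags: [])
    @paid_features = paid_features
    @beta_flags = beta_flags
  end

  def reporting_enabled?
    # Paying customers keep access; beta customers are grandfathered in.
    # Deleting the right-hand side is the whole cleanup once the flag dies.
    @paid_features.include?(:reporting) || @beta_flags.include?(:reporting)
  end
end

Account.new(beta_flags: [:reporting]).reporting_enabled? # => true
```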
JOËL: I think in kind of true strangler fig pattern, you would probably move all of your users off of that feature flag so that the people that have it active are zero, at which point it is effectively dead code, and then you can remove it. STEPHANIE: Yeah, that's a great point. And we had considered doing that first, but the thing that we had kind of come away with was that removing all of those customers from that feature flag would probably require a script or, you know, updating the production data. And that seemed a bit riskier actually to us because it wasn't as reversible as a code change. JOËL: I think you bring up a really interesting point, which is that production data changes, in general, are just scarier than code changes. At least for me, it feels like it's fairly easy generally to revert a code change. Whereas if I've messed up the production database, [laughs] that's going to be unpleasant few days. STEPHANIE: What's interesting is that this feature flag is not really supported by a nice user interface for managing it. And so, we inevitably had to do a more developer-focused solution to remove these customers from being able to access this feature. And so, the two options, you know, that we had available were to do it through data, like I mentioned, or do it through that code change. And again, I think we evaluated both options. But what's kind of nice about doing it with the code change is that when we eventually get to delete those feature flag records, it will be really nice and easy. JOËL: That's really exciting. One thing that's different about kind of more mature projects is that we often get to do some kind of change management, unlike a greenfield app where you just get to, oh, let's introduce this new thing, cool. Oftentimes, on a more mature project, before you introduce the new thing, you have to figure out, like, what is the migration path towards that? Is that a kind of work that you enjoy? STEPHANIE: I think this was definitely an exercise in thinking about how to break this down into steps. So, yeah, that change management process you mentioned, I, like, did find a lot of satisfaction in trying to break it up, you know, especially because I was also thinking that you know, maybe I am not able to see the complete, like, cleanup and removal, and, like, where can someone pick up after me? In some ways, I feel like I was kind of stepping into that migration, you know, six years [laughs] in the making from beta to the paid product. But I think I will feel really satisfied if I'm able to see this thing through and get to celebrate the success of saying, hey, like, I removed...at this point, it's a few hundred lines of code. [laughs] And also, you know, with the added business value of encouraging more customers to pay for the product. But I think I also I'm maybe figuring out how to accept like, okay, like, how could I, like, step away from this in the middle and be able to feel good that I've left it in a place that someone else could see through? JOËL: So you mentioned you're taking this over from somebody else, and this has been kind of six years in the making. I'm curious, is the person who introduced this feature flag six years ago are they even still at the company? STEPHANIE: No, they are not, which I think is pretty typical, you know, it's, like, really common for someone who had all that context about how it came to be. 
In fact, I actually didn't even realize that the feature flag was the original beta version of the product because that's not what it's called. [laughs] And it was when I was first onboarding onto this project, and I was like, "Hey, like, what is this? Like, why is this still here?" Knowing that the canonical, you know, version that customers were using was the paid version. And the team was like, "Oh, yeah, like, that's this whole thing that we've been meaning to remove for a long time." So it's really interesting to see the lifecycle, like, as to some of this code a little bit. And sometimes, it can be really frustrating, but this has felt a little more like an archaeology dig a little bit. JOËL: That sounds like a really interesting project to be on. STEPHANIE: Yeah. What about you, Joël, what's new in your world? JOËL: So, on my project, I've been having to do a lot of small database migrations. So I've got a bunch of these little features to do that all involve doing database migrations. They're not building on each other. So I'm just doing them all, like, in different feature branches, and pushing them all up to GitHub to get reviewed, kind of working on them in parallel. And the problem that happens is that when you switch from one branch where you've run a migration to another and then run migrations again, some local database state persists between the branch switch, which means that when you run the migrations, then this app uses a structure.sql. And the structure.sql has a bunch of extra junk from other branches you've been on that you don't want as part of your diff. And beyond, like, two or three branches, this becomes an absolute mess. STEPHANIE: Oh, I have been there. [laughs] It's always really frustrating when I switch branches and then try to do my development and then realize that I have had my leftover database changes. And then having to go back and then always forgetting what order of operations to do to reverse the migration and then having to re-migrate. I know that pain very well. JOËL: Something I've been doing for this project is when I switch branches, making sure that my structure SQL is checked out to the latest version from the main branch. So I have a clean structure SQL then I drop my local database, recreate an empty one, and run a rake db:schema:load. And that will load that structure file as it is on the main branch into the database schema. That does not have any of the migrations on this branch run, so, at that point, I can run a rake db:migrate. And I will get exactly what's on main plus what gets generated on this branch and nothing else. And so, that's been a way that I've been able to kind of switch between branches and run database operations without getting any cross-contamination. STEPHANIE: Cross-contamination. I like that term. Have you automated this at all, or are you doing this manually? JOËL: Entirely manually. I could probably script some of this. Right now...so it's three steps, right? Drop, create, schema load. I just have them in one command because you can chain Unix commands with a double ampersand. So that's what I'm doing right now. I want to say there's a db:reset task, but I think that it uses migrate rather than schema load. And I don't want to actually run migrations. STEPHANIE: Yeah, that would take longer. That's funny. I do love the up arrow key [laughs] in your terminal for, you know, going back to the thing you're running over and over again. 
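Spelled out, the routine Joël describes is a short chain of commands; a sketch, assuming a standard Rails app configured to dump its schema as structure.sql:

```sh
# Reset structure.sql to what's on main, rebuild the database from it, then
# run only this branch's migrations so the diff stays free of cross-branch junk.
git checkout main -- db/structure.sql && \
  bin/rails db:drop db:create db:schema:load && \
  bin/rails db:migrate
```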
I also appreciate the couple extra seconds that you're spending in waiting for your database to recreate. Like, you're paying that cost upfront rather than down the line when you are in the middle of doing [laughs] what you're trying to do and realize, oh no, my database is not in the state that I want it to be for this branch. JOËL: Or I'm dealing with some awful git conflict when trying to merge some of these branches. Or, you know, somebody comments on my PR and says, "Why are you touching the orders table? This change has nothing to do with orders." I'm like, "Oh, sorry, that actually came out of a different thing that I did." So, yep, keeping those diffs small. STEPHANIE: Nice. Well, I'm glad that you found a way to manage it. JOËL: So you mentioned the up arrow key and how that's really nice in the terminal. Something that I've been relying on a lot recently is reverse history search, CTRL+R in the terminal. That allows me to, instead of, like, going one by one in order of the history, filter for something that matches the thing that I've written. So, in this case, I'll hit CTRL+R, type, you know, Rails DB or whatever, then immediately it shows me, oh, did you want this long command? Hit enter, and I'm done. Even if I've done, you know, 20 git commands between then and the last time I ran it. STEPHANIE: Yeah, that's a great tip. So, a few weeks ago, we received a listener question from John, and he was responding to an episode where I'd asked about what the grammatical term is for verbs that are also nouns. He told us about the phrase "verbal noun," for which there's a specific term called gerund, which is basically, in English, words ending in -ing. So, the gerund version of bike would be biking. And he pointed out a really interesting relationship that gerunds have to data modeling, where you can use a gerund to model something that you might describe as a verb, especially as a user interaction, but can be turned into a noun to form a resource that you might want to introduce CRUD operations for in your application. So one example that he was telling us about is the idea of maybe confirming a reservation. And, you know, we think of that as an action, but there is also a noun form of that, which is a confirmation. And so, confirmation could be a new resource, right? It could even be backed at the database level. And now you have a simpler way of representing the idea of confirming a reservation that is more about the confirmation as the resource itself rather than some kind of appendage to a reservation itself. JOËL: That's really cool. We get to have a crossover between grammar terms and programming, and being able to connect those two is always a fun day for me. STEPHANIE: Yeah, I actually find it quite difficult, I think, to come up with noun forms of verbs on my own. Like, I just don't really think about resources that way. I'm so used to thinking about them in a more tangible way, I suppose. And it's really kind of cool that, you know, in the English language, we have turned these abstract ideas, these actions into, like, an object form. JOËL: And this is particularly useful when we're trying to design RESTful either APIs or even just resources for a Rails app that's server-rendered so that instead of trying to create all these, like, extra actions on our controller that are verbs, we might decide to instead create new resources in the system, new nouns that people can do the standard seven to. STEPHANIE: Yes.
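As a concrete Rails illustration of John's gerund idea (the names are assumptions, not from the episode): instead of bolting a verb route onto reservations, the action becomes its own noun resource handled by the standard REST actions.

```ruby
# config/routes.rb
Rails.application.routes.draw do
  resources :reservations do
    # The RPC-style verb route this approach avoids:
    #   post :confirm, on: :member        # POST /reservations/:id/confirm
    resource :confirmation, only: [:create, :destroy]
    # POST   /reservations/:reservation_id/confirmation -> confirm
    # DELETE /reservations/:reservation_id/confirmation -> un-confirm
  end
end
```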
I like that better than introducing custom controller actions or routes that deviate from RESTful conventions because, you know, I probably have seen a slash confirm reservation [laughs] URL. And, you know, this is, I think, an interesting way of avoiding having too many of those deviating endpoints. JOËL: Yeah, I found that while Rails does have support for those, just all the built-in things play much more nicely if you're restricting yourself to the classic seven. And I think, in general, it's easier to model and think about things in a Rails app when you have a lot of noun resources rather than one giant controller with a bunch of kind of verb actions that you can do to it. In the more formal jargon, I think we might refer to that as RESTful style versus RPC style, a Remote Procedure Call. STEPHANIE: Could you tell me more about Remote Procedure Calls and what that means? JOËL: The general idea is that it's almost like doing a method call on an object somewhere. And so, you would say, hey, I've got an account, and I want to call the confirm method on it because I know that maybe underlying this is an ActiveRecord account model. And the API or the web UI is just a really thin layer over those objects. And so, more or less, whatever your methods on your object are, can be accessed through the API. So the two kind of mirror each other. STEPHANIE: Got it. That's interesting because I can see how someone might want to do that, especially if, you know, the account is the domain object they're using at the, you know, persistence layer, and maybe they're not quite able to see an abstraction for something else. And so, they kind of want to try to fit that into their API design. JOËL: So I have a perhaps controversial opinion, which is that the resources in your Rails application, so your controllers, shouldn't map one-to-one with your database tables, your models. STEPHANIE: So, are you saying that you are more likely to have more abstractions or various resources than what you might have at the database level? JOËL: Well, you know what? Maybe more, but I would say, in general, different. And I think because both layers, the controller layer, and the model layer, are playing with very different sets of constraints. So when I'm designing database tables, I'm thinking in terms of normalization. And so, maybe I would take one big concept and split it up into smaller concepts, smaller tables because I need this data to be normalized so that there's no ambiguity when I'm making queries. So maybe something that's one resource at the controller layer might actually be multiple tables at the database layer. But the inverse could also be true, right? You might have, in the example that John gave, you know, an account that has a single table in the database with just a Boolean field confirmed yes or no. And maybe there's just a generic account resource. But then, separately, there's also a confirmation resource. And so, now we've got more resources at the controller layer than at the database layer. So I think it can go either way, but they're just not tightly coupled to each other. STEPHANIE: Yeah, that makes sense. I think another way that I've seen this manifest is when, like you said, like, maybe multiple database tables need to be updated by, you know, a request to this endpoint. And now we get into [chuckles] what some people may call services or that territory of basically something. And what's interesting is that a lot of the service classes are named as verbs, right? So, OrderCreator.
And, like, whatever order of operations that needs to happen on multiple database objects that happens as a result of a user placing an order. But the idea that those are frequently named as verbs was kind of interesting to me and a bit of a connection to our new gerund tip. JOËL: That's really interesting. I had not made that connection before. Because I think my first instinct would be to avoid a service object there and instead use something closer to a form object that takes the same idea and represents it as a noun, potentially with the same name as the resource. So maybe leaning really heavily into that idea of the verbal noun, not just in describing the controller or the route but then also maybe the object backing it, even if it's not connecting directly to a database table. STEPHANIE: Interesting. So, in this case, would the form object be mapped closer to your controller resource? JOËL: Potentially, yes. So maybe I do have some kind of, like, object that represents a confirmation and makes it nicer to render the confirmation form on the edit page or the new page. In this case, you know, it's probably just one checkbox, so maybe it's not worth creating an object. But if there were multiple fields, then yes, maybe it's nice to create an in-memory object that has the same name as the resource. Similar maybe for a resource that represents multiple underlying database tables. It can be nice to have kind of one object that represents all of them, almost like a facade, I guess. STEPHANIE: Yeah, that's really interesting. I like that idea of a facade, or it's, like, something at a higher level representing hopefully, like, some kind of meaning of all of these database objects together. JOËL: I want to give a shout-out to talk from a former thoughtboter, Derek Prior—actually, former Bike Shed host—from RailsConf 2017 called In Relentless Pursuit of REST, where he digs into a lot of these concepts, particularly how to model resources in your Rails app that don't necessarily map one to one with a database table, and why that can be a good thing. Have you seen that talk? STEPHANIE: I haven't, but I love the title of it. It's a great pun. It's very evocative, I think because I'm really curious about this idea of a relentless pursuit. Because I think another way to react to that could be to be done with REST entirely and maybe go with something like GraphQL. JOËL: So instead of a relentless pursuit, it's a relentless...what's the opposite of pursuing? Fleeing? STEPHANIE: Fleeing? [laughs] I like how we arrived there at the same time. Yes. So now I'm thinking of I had mentioned a little bit ago on the show we had our spicy takes Lightning Talks on our Boost Team. And a fellow thoughtboter, Chris White, he had given a talk about Why REST Is Not the Best and for -- JOËL: Also, a great title. STEPHANIE: Yes, also, a great title. JOËL: I love the rhyming there. STEPHANIE: Yeah. And his reaction to the idea of trying to conform user interactions that don't quite map to a noun or an obvious resource was to potentially introduce GraphQL, where you have one endpoint that can service really anything that you can think of, I suppose. But, in his example, he was making the argument that human interactions are not database resources, right? And maybe if you're not able to find that abstraction as a noun or object, with GraphQL, you can encapsulate those ideas as closer to actions, but in the GraphQL world, like, I think they're called mutations. 
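For contrast, a hypothetical sketch of that same interaction in GraphQL schema language, where the user interaction stays a verb, a mutation the client invokes directly, rather than being recast as a noun resource:

```graphql
type Reservation {
  id: ID!
  confirmed: Boolean!
}

type Mutation {
  # "Confirm a reservation" modeled as a named action, not a resource.
  confirmReservation(reservationId: ID!): Reservation!
}
```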
But it is, I think, a whole world of, like, deciding what you want to be changed on the server side that is a little less constrained to having to come up with the right abstraction. JOËL: I feel like GraphQL kind of takes that, like, complete opposite philosophy in that instead of saying, hey, let's have, like, this decoupling between the API layer and the database, GraphQL almost says, "No, let's lean into that." And yeah, you want to traverse the graph of, like, tables under the hood? Absolutely. You get to know the tables. You get to know how they're related to each other. I guess, in theory, you could build a middle layer, and that's the graph that gets traversed rather than the graph of the tables. In practice, I think most people build it so that the API layer more or less has access directly to tables. Has that been your experience? STEPHANIE: That's really interesting that you brought that up. I haven't worked with GraphQL in a while, but I was reading up on it before we started recording because I was kind of curious about how it might play with what we're talking about now. But the idea that it's graphed based, to me, was like, oh, like, that naturally, it could look very much like, you know, an entity graph of your relational database. But the more I was reading about the GraphQL schema and different types, I realized that it could actually look quite different. And because it is a little bit closer to your UI layer, like, maybe you are building an abstraction that is more for serving that as that middle layer between your front end and your back end. JOËL: That's really interesting that you mentioned that because I feel like the sort of traditional way that APIs are built is that they are built by the back-end team. And oftentimes, they will reflect the database schema. But you kind of mentioned with GraphQL here, sometimes it's the opposite that happens. Instead of being driven kind of from the back towards the front, it might be driven from the front towards the back where the UI team is building something that says, hey, we need these objects. We need these connections. Can you expose them to us? And then they get access to them. What has been your experience when you've been working with front ends that are backed by a GraphQL API? STEPHANIE: I think I've tended to see a GraphQL API when you do have a pretty rich client-side application with a lot of user interactions that then need to, you know, go and fetch some data. And you, like, really, you know, obviously don't want a page reload, right? So it's really interesting, actually, that you pointed out that it's, like, perhaps the front end or the UI driving the API. Because, on one hand, the flexibility is really nice. And there's a lot more freedom even in maybe, like, what the product can do or how it would look. On the other hand, what I've kind of also seen is that eventually, maybe we do just want an API that we can talk to separate from, you know, any kind of UI. And, at that point, we have to go and build a separate thing [laughs] for the same data. JOËL: So we've been talking about structuring APIs and, like, boundaries and things like that. I think my personal favorite feature of GraphQL is not the graph part but the fact that it comes with a built-in schema. And that plays really nicely with some typed technologies. Particularly, I've used Elm with some of the GraphQL libraries there, and that experience is just really nice. 
Where it will tell you if your front-end code is not compatible with the current API schema, and it will generate some things based off the schema. So you have this really nice feedback cycle where somebody makes a change to the API, or you want to make a change to the code, and it will tell you immediately is your front end compatible with the current state of the back end? Which is a classic problem with developing front-end code. STEPHANIE: First of all, I think it's very funny that you admitted to not preferring the graph part of GraphQL as a graph enthusiast yourself. [laughs] But I think I'm in agreement with you because, like, normally, I'm looking at it in its schema format. And that makes a lot of sense to me. But what you said was really interesting because, in some ways, we're now kind of going back to the idea of maybe boundaries blurring because the types that you are creating for GraphQL are kind of then servicing both your front end and your back end. Do you think that's accurate? JOËL: Ooh. That is an important distinction. I think you can. And I want to say that in some TypeScript implementations, you do use the types on both sides. In Elm, typically, you would not unless there's something really primitive, like a string or something like that. STEPHANIE: Okay, how does that work? JOËL: So you have some conversion layer that happens. STEPHANIE: Got it. JOËL: Honestly, I think that's my preference, and not just at the front end versus API layer but kind of all throughout. So the shape of an object in the database should not be the same shape as the object in the business logic that runs on the back end, which should not be the same shape as the object in transport, so JSON or whatever, which is also not the same shape as the object in your front-end code. Those might be similar, but each of these layers has different responsibilities, different things it's trying to optimize for. Your code should be built, in my opinion, in a way that allows all four of those layers to diverge in their interpretation of not only what maybe common entities are, so maybe a user looks slightly different at each of these layers, but maybe even what the entities are to start with. And that maybe in the database what, we don't have a full user, we've got a profile and an account, and those get merged somehow. And eventually, when it gets to the front end, all we care about is the concept of a user because that's what we need in that context. STEPHANIE: Yeah, that's really interesting because now it almost sounds like separate systems, which they kind of are, and then finding a way to make them work also as one bigger [laughs] system. I would love to ask, though, what that conversion looks like to you. Or, like, how have you implemented that? Or, like, what kind of pattern would you use for that? JOËL: So I'm going to give a shout-out to the article that I always give a shout-out to: Parse, Don't Validate. In general, yeah, you do a transformation, and potentially it can fail. Let's say I'm pulling data from a GraphQL API into an Elm app. Elm has some built-in libraries for doing those transformations and will tell you at compile time if you're incorrectly transforming the data that comes from the shape that we expect from the schema. But just because the schema comes in as, like, a flat object with certain fields or maybe it's a deeply nested chain of objects in GraphQL, it doesn't mean that it has to be that way in your Elm app. 
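The episode's examples are in Elm, which checks these transformations at compile time. As a rough Ruby analogue of that boundary (all names invented), the raw transport shape is parsed once, at the edge, into the entity the rest of the app wishes it had, in the spirit of "Parse, don't validate":

```ruby
require "json"

# Hypothetical domain object: the shape the app wants, not the API's shape.
User = Struct.new(:id, :display_name, keyword_init: true)

# Parse once at the boundary and fail loudly there, rather than re-validating
# the raw hash over and over deeper inside the app.
def parse_user(json)
  data = JSON.parse(json)
  profile = data.fetch("profile") # nested in transport, flattened in the domain
  User.new(id: data.fetch("id"), display_name: profile.fetch("name"))
end

user = parse_user('{"id": 1, "profile": {"name": "Ada"}}')
user.display_name # => "Ada"
```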
So that transformation step, you get to sort of make it whatever you want. So my general approach is, at each layer, forget what other people are sending you and just design the entities that you would like to. I've heard the term wish-driven development, which I really like. So just, you know, if you could have, like, to make your life easy, what would the entities look like? And then kind of work backwards from there to make that sort of perfect world a reality for you and make it play nicely with other systems. And, to me, that's true at every layer of the application. STEPHANIE: Interesting. So I'm also imagining that the transformation kind of has to happen both ways, right? Like, the server needs a way to transform data from the front end or some, you know, whatever, third party. But that's also true of the front end because what you're kind of saying is that these will be different. [laughs] JOËL: Right. And, in many ways, it has to be because JSON is a very limited format. But some of the fancier things that you might have access to either on the back end or on the front end might be challenging to represent natively in JSON. And a classic one would be what Elm calls a custom type. You know, they're also called tagged unions, discriminated unions, algebraic data types. These things go by a bajillion names, and it's confusing. But they're really kind of awkward and hard, almost impossible to represent in straight-up JSON because JSON is a very limited kind of transportation format. So you have to almost, like, have a rehydration step on one side and a kind of packing down step on the other when you're reading or writing from a JSON API. STEPHANIE: Have you ever heard of or played that Wikipedia game Getting to Philosophy? JOËL: I've done, I think, variations on it, the idea that you have a start and an end article, and then you have to either get through in the fewest amount of clicks, or it might be a timed thing, whoever can get to the target article first. Is that what you're referring to? STEPHANIE: Yeah. So, in this case, I'm thinking, how many clicks through Wikipedia to get to the Wiki article about philosophy? And that's how I'm thinking about how we end up getting to [laughs] talking about types and parsing, and graphs even [laughs] on the show. JOËL: It's all connected, almost as if it forms a graph of knowledge. STEPHANIE: Learning that's another common topic on the show. [laughs] I think it's great. It's a lot of interesting lenses to view, like, the same things and just digging further and further deeper into them to always, like, come away with a little more perspective. JOËL: So, in the vein of wish-driven development, if you're starting a brand-new front-end UI, what is your sort of dream approach for working with an API? STEPHANIE: Wish-driven development is very visceral to me because I often think about when I'm working with legacy code and what my wishes and dreams were for the, you know, the stack or the technology or whatever. But, at that point, I don't really have the power to change it. You know, it's like I have what I have. And that's different from being in the driver's seat of a greenfield application where you're not just wishing. You're just deciding for yourself. You get to choose. At the end of the day, though, I think, you know, you're likely starting from a simple application. And you haven't gotten to the point where you have, like, a lot of features that you have to figure out how to support and, like, complexity to manage. 
And, you know, you don't even know if you're going to get there. So I would probably start with REST. JOËL: So we started this episode from a very back-end perspective where we're talking about Rails, and routes, and controllers. And we kind of ended it talking from a very front-end perspective. We also contrasted kind of a more RESTful approach, versus GraphQL, versus more kind of old-school RPC-style routing. And now, I'm almost starting to wonder if there's some kind of correlation between whether someone primarily works from the back end and maybe likes, let's say, REST versus maybe somebody on the front end maybe preferring GraphQL. So I'd be happy for any of our listeners who have strong opinions preferring GraphQL, or REST, or something else; message us at hosts@bikeshed.fm and let us know. And, if you do, please let us know if you're primarily a front-end or a back-end developer because I think it would be really fun to see any connections there. STEPHANIE: Absolutely. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Charity Majors is Co-Founder and CTO of Honeycomb, which provides full-stack observability that enables engineers to deeply understand and debug production software together. Victoria and Will talk to Charity about observability, her technical background and decision to start Honeycomb.io, thoughts about the whole ops SRE profession, and things that surprised her along her journey of building a company around observability as a concept. Honeycomb (https://www.honeycomb.io/) Follow Honeycomb on Facebook (https://www.facebook.com/honeycombio), Twitter (https://twitter.com/honeycombio), Youtube (https://www.youtube.com/channel/UCty8KGQ3oAP0MQQmLIv7k0Q), or LinkedIn (https://www.linkedin.com/company/honeycomb.io/). Follow Charity Majors on LinkedIn (https://www.linkedin.com/in/charity-majors/) or Twitter (https://twitter.com/mipsytipsy), or visit her website (https://charity.wtf/). Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. WILL: And I'm your other host, Will Larry. And with us today is Charity Majors, Co-Founder and CTO of Honeycomb, which provides full-stack observability that enables engineers to deeply understand and debug production software together. Charity, thank you for joining us. How are you doing? CHARITY: Thanks for having me. I'm a little bit crunchy from a [laughs] long flight this morning. But I'm very happy to be home in San Francisco and happy to be talking to you. VICTORIA: Wonderful. And, Charity, I looked at your profile and noticed that you're a fan of whiskey. And I thought I might ask you just to get us started here, like, what's your favorite brand? CHARITY: Oh, goodness, that's like asking me to choose my favorite child if I had children. [laughter]. You know, I used to really be into the peaty scotches, the Islays, in particular. But lately, I've been more of a bourbon kick. Of course, everybody loves Pappy Van Winkle, George T. Stagg; impossible to find now, but it's so, so good. You know, if it's high-proof and single barrel, I will probably drink it. VICTORIA: That sounds great. Yeah, I tend to have the same approach. And, like, people ask me if I like it, and I like all of them. [laughter] I don't [inaudible 01:21] that I didn't like. [laughs] CHARITY: [inaudible 01:23] tongue sting? Then I'm in. [laughs] VICTORIA: Yeah, [inaudible 01:26]. WILL: See, I'm the opposite. I want something smooth. I'm a fruity drink type of guy. I'm just going, to be honest. CHARITY: There's no shame in that. WILL: No shame here. [laughs] Give me a margarita, and you have a happy Will for life. [laughs] VICTORIA: We'll have to get you to come out and visit San Diego for some margaritas, Will. That's -- CHARITY: Oh yeah. VICTORIA: Yeah, it's the place to be. Yeah, we do more of a bourbon drink in our house, like bourbon soda. That's usually what we make, like, my own custom simple syrup, and mix it with a little bourbon and soda water. And that's what we do for a cool down at the end of the day sometimes, yeah. Well, awesome. Let's see. So, Charity, why don't you just tell me a little more about Honeycomb? What is it? CHARITY: Well, it's a startup that hasn't failed yet, so... [laughs] to my own shock. 
[laughs] We're still around seven and a half years in. And I say that just so much joking. Like, you're not really supposed to say this as a founder, but, like, I 100% thought we were going to fail from the beginning. But we haven't yet, and we just got more money. So we'll be around for a while. We kind of pioneered the whole concept of observability, which now doesn't really mean anything at all. Everybody and their mother is like, well, I do observability, too. But back when we started talking about it, it was kind of a little bit revolutionary, I guess in that, you know, we started talking about how important it is to have high cardinality data in your systems. You really can't debug without it. And the fact that our systems are getting just astronomically more complex, and yet, we're still trying to debug it with these tools based on, you know, the metric data type [laughs] defined since the '70s when space was incredibly rare and expensive. And now space is incredibly cheap, but we should be wasteful with it so we can understand our incredibly complex systems. So that's us. We really try to empower software engineers to own their own code in production. For a long time, it was like, all of the tools for you to understand your software were really written for low-level ops people because they speak the language of, like, RAM, and disks, and CPU, which you shouldn't have to understand that in order to be able to understand I just deployed something, what went wrong? WILL: I love the honesty because there are so many founders that I'll talk to, and I'm like, okay, you're very successful. But did you really expect this to be what it is today? Did you really expect to survive? Because, like, just some of their ideas, I'm like, it's brilliant, but if I was with you back in the day, I'd be like, it ain't going to work. It's not going to work. [laughs] CHARITY: Yeah. And I feel like the VC culture really encourages delusion, just, like, self-delusion, like, this delusive thinking. You're supposed to, like, broadcast just, like, rock-solid confidence in yourself and your ideas at all time. And I think that only sociopaths do that. [laughs] I don't want to work for anyone who's that confident in themselves or their idea. Because I'm showing my own stripes, I guess, you know, I'm a reliability engineer. I wake up in the morning; I'm like, what's wrong with the day? That's just how my brain works. But I feel like I would rather work with people who are constantly scanning the horizon and being like, okay, what's likely to kill us today? Instead of people who are just like, I am right. [laughs] You know? VICTORIA: Yeah. And I can relate that back to observability by thinking how, you know, you can have an idea about how your system is supposed to work, and then there's the way that it actually works. [laughs] CHARITY: Oh my God. VICTORIA: Right? CHARITY: Yes. It's so much that. VICTORIA: Maybe you can tell us just a little bit more about, like, what is observability? Or how would you explain that to someone who isn't necessarily in it every day? CHARITY: I would explain it; I mean, it depends on who your audience is, of course. But I would explain it like engineers spend all day in their IDEs. And they come to believe that that's what software is. But software is not lines of code. Software is those lines of code running in production with real users using it. That's when software becomes real. And, for too long, we've treated like that, like, an entirely different...well, it's written. 
[laughs] You know, for launch, I was like, well, it's ops' problem, as the meme says. But we haven't really gotten to a point yet where...I feel like when you're developing with observability, you should be instrumenting your code as you go with an eye towards your future self. How am I going to know if this is working or not? How am I going to know if this breaks? And when you deploy it, you should then go and look at your code in production and look at it through the lens of the telemetry that you just wrote and ask yourself, is it doing what I expected it to? Does anything else look weird? Because the cost of finding and fixing bugs goes up exponentially from the moment that you write them. It's like you type a bug; you backspace. Cool, good for you. That's the fastest you can fix it. The next fastest is if you find it when you're running tests. But tests are only ever going to find the things you could predict were going to fail or that have already failed. The first real opportunity that you have to see if your code really works or not is right after you've deployed it, but only if you've given yourself the telemetry to do so. Like, the idea of just merging your code and walking out the door, or merging your code and waiting to get paged or to get [laughs] escalated, is madness. This should be an artifact of the battle days, when dev writes it and ops runs it. That doesn't work, right? Like, in the beginning, we had software engineers who wrote code and ran that code in production, and that's how things should be. You should be writing code and running code in production. And the reason I think we're starting to see that reality emerge again is because our systems have gotten so complicated. We kind of can't not, because you can't really run your code as a black box anymore. You can't ignore what's on the inside. You have to be able to look at the code in order to be able to run it effectively. And conversely, I don't think you could develop good code unless you're constantly exposing yourself to the consequences of that code. It lets you know when it breaks; that whole feedback loop was completely severed when we had dev versus ops. And we're slowly kind of knitting it together again. But, like, that's what's at the heart of that incredibly powerful feedback loop. The heart of all software engineering is instrumenting your code and looking at it and asking yourself, is it doing what I expected it to do? WILL: That's really neat. You said you're a reliability engineer. What's your background? Tell me more about it because you're the CTO of Honeycomb. So you have some technical background. What does that look like? CHARITY: Yeah, well, I was a music major and then a serial dropout. I've never graduated from anything, ever. And then, I worked at startups in Silicon Valley. Nothing you'd ever...well, I worked at Linden Lab for a few years and some other places. But honestly, the reason I started Honeycomb was because...so I worked at Parse. I was the infrastructure lead at Parse; rest in peace. It got acquired by Facebook. And when I was leaving Facebook, it was the only time in my life that I'd ever had a pedigree. Well, I've actually been an ops engineer my entire career. When I was leaving Facebook, I had VCs going, "Would you like some money to do something? Because you're coming from Facebook, so you must be smart." On the one hand, that was kind of offensive.
And on the other hand, like, I kind of felt the obligation to just take the money and run, like, on behalf of all dropouts, of women, and queers everywhere. Just, you know, how often...am I ever going to get this chance again? No, I'm not. So, good. VICTORIA: Yes, I will accept your money. [laughs] CHARITY: Yeah, right? VICTORIA: I will take it. And I'm not surprised that you were a music major. I've met many, I would say, people who are active in social media about DevOps, and then it turns out they were a theater major, [laughs] or music, or something different. And they kind of naturally found their way. CHARITY: The whole ops SRE profession has historically been a real magnet for weirdo people, weird pasts, people who took very non-traditional paths. So it's always been about tinkering, just understanding systems. And there hasn't been this high bar for formal, you know, knowledge that you need just to get your first job. I feel like this is all changing. And it makes me kind of...I understand why it's changing, and it also makes me kind of sad. VICTORIA: So I think you have a quote about, you know, working on infrastructure teams that everything comes back to databases. CHARITY: [laughs] VICTORIA: I wonder if you could expand on that. CHARITY: I've been an accidental DBA my entire career. I just always seemed to be the one left holding the bag. [laughs] We were playing musical chairs. I just feel like, you know, as you're moving up the stack, you can get more and more reckless. As you move down the stack, the closer you get to, like, bits on disk, the more conservative you have to be, the more blast radius your mistakes could have. Like, shit changes all the time in JavaScript land. In database land, we're still doing CRUD operations, like, since Stonebraker did it in the '70s. We're still doing very fundamental stuff. I love it, though, because, I don't know, it's such a capsule of computers at large, which is just that people have no idea how much shit breaks. [laughs] Stuff breaks all the time. And the beauty of it is that we keep going. It's not that things don't break. You have no idea how much stuff is broken in your stack right now. But we find ways to resolve it after the fact. I just think that data is so fascinating because it has so much gravity. I don't know, I could keep going, but I feel like you get the point. I just think it's really fun. I think danger is fun, I think. It might not surprise you to learn that I, too, was diagnosed with ADHD in the past couple of years. I feel like this is another strand that most DevOps, SRE types have in common, which is just [laughs] highly motivated in a good way by panic. [laughs] WILL: I love that you said you love danger because I feel like that is right in your wheelhouse. Like, you have to love danger to be in that field because it's unpredictable. You're the one that's coming in and putting out the fires when everyone else is running for the window. Like you said, like, you got caught holding the bag. So that's really neat. This is a big question for me, especially for being an engineer, a dev, do you find that product and design teams understand and see the value in SRE? CHARITY: Oooh. These types of cultural questions are always so difficult for me to gauge whether or not my sample is representative of the larger population. Because, in my experience, you know, ops teams typically rule the roost, like, they get final say over everything. But I know that that's not typically true.
Like, throughout the industry, like, ops teams kind of have a history of being kind of kicked around. I think that they do see the value because everybody can see when it breaks. But I think that they mostly see the value when it breaks. I think that it takes a rare, farsighted product team to be able to consent to making, like, investments all along in the kinds of improvements that will pay off later on instead of just pouring all of the resources into fast fixes and features and feature, feature, feature. And then, of course, you know, you slowly grind to a halt as a team because you're just amassing surface area. You're not paying down your tech debt. And I think it's not always clear to product and design leaders how to make those investments in a way that actually benefits them instead of it just being a cost center. You know, it's just something that's always a brake on them instead of actually enabling them to move faster. WILL: Yeah, yeah. And I can definitely see that being an engineer, a dev. I'm going to change it a little bit. And I'm going to ask, Victoria, since you're the managing director of that team, how do you feel about that question? Do you feel that's the same thing, or what's your observation of that? VICTORIA: I think Charity is, like, spot on because it does depend on the type of organization that you're working in, the hierarchy, and who gets priority over budget and things like that. And so the interesting thought for me coming from federal IT organizations into more commercial and startup organizations is that there is a little bit of a disconnect. And we started to ask our designers and developers like, "Well, have you thought much about, like, what happens when this fails?" [laughs] And especially -- CHARITY: Great question. VICTORIA: Yeah, like, when you're dealing with, like, healthcare startups or with bank startups and really thinking through all the ways it could go wrong. Is it a new pathway? Which I think is exciting for a lot of people. And I'm curious, too, Charity, like with Honeycomb, were there things that surprised you in your journey of discovery about, you know, building a company about observability and what people wanted out of this space? CHARITY: Oh my goodness. [laughs] Was anything not a surprise? I mean, [laughs] yeah, absolutely. You're a director of what team? VICTORIA: I'm a managing director of our Mission Control team. CHARITY: Oooh. VICTORIA: Which is our platform engineering, and DevOps, and SRE team. CHARITY: Now, does your platform engineering team have product managers? VICTORIA: I think it might be me. [laughs] CHARITY: Aha. VICTORIA: It might be me. And we have a team lead, and our CTO is actually our acting development director. So he's really leading the development of that platform project. CHARITY: When I was in New York the last couple of days, I just gave a talk at KubeCon about the Perils, Pitfalls, and Pratfalls of Platform Engineering, just talking about all of the ways that platform teams accidentally steer themselves into the ditch. One of the biggest mistakes that people make in that situation is not running the platform team like a product team, you know, having a sort of, like, if we build it, they will come sort of a mentality towards the platform that they're building internally for their engineers, and not doing the things like, you know, discovery or finding out like, am I really building, you know, the most important thing, you know, that people need right now?
And it's like, I didn't learn those skills as an engineer. Like, in the infrastructure land, we didn't learn how to work with product people. We didn't learn how to work with designers. And I feel like the biggest piece of career advice that I give, you know, people like me now, is learn how to work with product and like a product org. I'm curious, like, what you're observing in your realm when it comes to this stuff. Like, how much like a product org do you work? VICTORIA: Oh, I agree 100%. So I've actually been interested in applying our platform project to the thoughtbot Incubator Program. [laughs] CHARITY: Mmm. VICTORIA: So they have this method for doing market strategy, and user interviews, and all of that...exactly what you're saying, like, run it like a product. So I want them to help me with it. [laughs] CHARITY: Nice. VICTORIA: Yes, because I am also a managing director, and so we're managing a team and building business. And we also have this product or this open-source project, really. It's not...we don't necessarily want to be prescriptive with how we, as thoughtbot, tell people how to build their platforms. So with every client, we do a deep dive to see how is their dev team actually working? What are the pain points? What are the things we can do based on, like, you know, this collection of tools and knowledge that we have on what's worked for past clients that makes the most sense for them? So, in that way, I think it is very customer-focused [laughs], right? And that's the motto we want to keep with. And I have been on other project teams where we just try to reproduce what worked for one client and to make that a product. And it doesn't always work [laughs] because of what you're saying. Like, you have to really...and especially, I think that just the diversity of the systems that we are building and have been built is kind of, like, breathtaking [laughs], you know. CHARITY: Yeah. [chuckles] VICTORIA: I'm sure you have some familiarity with that. CHARITY: [laughs] VICTORIA: But what did you really find in the market that worked for you right away, like, was, like, the problem that you were able to solve and start building within your business? CHARITY: We did everything all wrong. So I had had this experience at Facebook, which, you know, at Parse, you know, we had all these reliability issues because of the architecture. What we were building was just fundamentally...as soon as any customer got big, like, they would take up all the resources in this shared, you know, tenancy thing, and the whole platform would go down. And it was so frustrating. And we were working on a rewrite and everything. Like, it was professionally humiliating for me as a reliability engineer to have a platform this bad at reliability. And part of the issue was that you know, we had a million mobile apps, and it was a different app every time, different application...the iTunes Store, like, top five or something. And so the previous generation of tools and strategies like building dashboards and doing retros and being like, well, I'll make a dashboard so that I can find this problem next time immediately, like, just went out the window. Like, none of them would work because they were always about the last battle. And it was always something new. And at one point, we started getting some datasets into this tool at Facebook called Scuba. It was butt ugly. Like, it was aggressively hostile to users. 
But it let us do one thing really well, which was slice and dice high cardinality dimensions in near real-time. And having the ability to do that to, like, break down by user ID, which is not possible with, you know, I don't know how familiar -- I'll briefly describe high cardinality. So imagine you have a collection of 100 million users. And the highest possible cardinality would be a unique ID because, you know, social security number, very high cardinality. And something much lower cardinality would be like inches of height. And all of our metrics and dashboards are oriented around low-cardinality dimensions. If you have more than a couple hundred hosts, you can no longer tag your metrics with a host ID. It just falls apart. So being able to break down by, like, you know, one of a million app IDs...the amount of time it took for us to identify and find these brand-new problems dropped like a rock, like, from hours...we never even solved a lot of the problems that we saw. We just recovered. We moved on [laughs] with our day. It dropped from that to, like, seconds or minutes. Like, it wasn't even an engineering problem anymore. It was like a support problem, you know, you just go click, click, click, click, click, oh, there it is. Just follow the trail of breadcrumbs. That made such an impression on me. And when I was leaving, I was just like, I can't go back to not having something like this. I was so much less powerful as an engineer. It's just, like, it's unthinkable. So when we started Honeycomb, we were just, like, we went heads down, and we started building. We didn't want to write a database. We had to write a database because there was nothing out there that could do this. And we spent the first year or two not even really talking to customers. When we did talk to customers, I would tell our engineers to ignore their feedback [laughs] because they were all telling us they wanted better metrics. And we're like, no, we're not doing metrics. The first thing that we found we could kind of connect to real problems that people were looking for was that it was high cardinality. There were a few, not many; there were a few engineers out there Googling for high cardinality metrics. And those engineers found us and became our earliest customers because we were able to do breathtaking things...from their perspective, they were like, we've been told this is impossible. We've been told that this can't be done. Teams like Intercom were able to start tagging their requests with, like, app ID and customer ID. And immediately started noticing things like, oh, this database that we were just about to have to, like, spend six months sharding and extending, oh, it turns out 80% of the queries in flight to this database are all coming from one customer who is paying us $200 a month, so maybe we shouldn't [laughs] do that engineering labor. Maybe we should just, you know, throttle this guy who is only paying us 200 bucks a month. Or just all these things you can't actually see until you can use this very, very special tool. And then once you can see that... So, like, our first customers became rabid fans and vouched for us to investors, and this still blows people's minds to this day. It's an incredibly difficult thing to explain and describe to people, but once they see it on their own data, it clicks because everybody's run into this problem before, and it's really frustrating.
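The difference Charity is describing is easier to see in code. Here is a rough, illustrative sketch of a "wide event" carrying high-cardinality fields: a plain Ruby hash emitted as a JSON line, not any real vendor SDK, and every field name in it is invented for the example.

```ruby
require "json"
require "time"
require "securerandom"

# One structured event per request, instead of a pre-aggregated metric.
# Fields like user_id and app_id are high cardinality: millions of
# possible values, far too many to use as tags in a metrics system.
def emit_request_event(user_id:, app_id:, endpoint:, duration_ms:)
  event = {
    timestamp: Time.now.utc.iso8601,
    request_id: SecureRandom.uuid, # unique per request: maximal cardinality
    user_id: user_id,              # one of ~100 million users
    app_id: app_id,                # one of ~a million apps, as at Parse
    endpoint: endpoint,            # low cardinality by comparison
    duration_ms: duration_ms
  }
  $stdout.puts(event.to_json)      # ship to whatever event store you query
end

emit_request_event(user_id: 8_675_309, app_id: "app-42",
                   endpoint: "/1/classes/query", duration_ms: 183)
```

Because each event keeps the raw app_id, a query like "group by app_id, sort by total query time" can surface the one $200-a-month customer hogging a database, which is exactly the kind of Intercom-style discovery described above.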
VICTORIA: Yeah, that's super interesting and a great example to illustrate that point of just, like, not really knowing what's going on in your system. And, you know, you mentioned just, like, certainly at scale, that's when you really, really need to have [laughs] data and insight into your systems. CHARITY: Yeah. VICTORIA: But one question I get a lot is, like, at what scale do you actually need to start worrying about SRE? [laughs] Which -- CHARITY: SRE? VICTORIA: Yeah, I'll let you answer that. Yeah, site reliability or even things like...like, everything under that umbrella like observability, like, you know, putting in monitoring and tracing and all this stuff. Sometimes people are just like, well, when do I actually have to care? [laughs] CHARITY: I recognize this is, you know, coming from somebody who does this for a living, so, like, people can write it off all they want. But, like, the idea of developing without observability is just sad to me, like, from day one. This is not a tax. It's not something that slows you down or makes your lives worse. It's something that makes your lives better from day one. It helps you move more quickly, with more confidence. It helps you not make as many mistakes. It helps you... Like, most people are used to interacting with their systems, which are just like flaming hairballs under their bed. Nobody has ever understood these systems. They certainly don't understand them. And every day, they ship more code that they don't understand, create systems that they've never understood. And then an alarm goes off, and everybody just, like, braces for impact because they don't understand them. This is not the inevitable end state of computing. It doesn't have to be like this. You can have systems that are well-understood, that are tractable, that you could...it's just...it's so sad to me that people are like, oh God, when do I have to add telemetry? And I'm just like, how do you write software without telemetry? How do you have any confidence that the work you're doing is what you thought you were doing? You know, I just... And, of course, if you're waiting to tack it on later, of course, it's not going to be as useful because you're trying to add telemetry for stuff you were writing weeks, or months, or years ago. The time to add it is while you're writing it. No one is ever going to understand your software as well as you do the moment that you're writing it. That's when you know your original intent. You know what you're trying to do. You know why you're trying to do it. You know what you tried that didn't work. You know, ultimately, what the most valuable pieces of data are. Why wouldn't you leave little breadcrumbs for yourself so that future you can find them? You know, it's like...I just feel like this entire mental shift it can become just as much of a habit as like commenting your code or adding, you know, commenting in your pull request, you know. It becomes second nature, and reaching for it becomes second nature. You should have in your body a feeling of I'm not done until you've looked at your telemetry in production. That's the first moment that you can tell yourself, ah, yes, it probably does what I think it does, right? So, like, this question it makes me sad. It gets me a little worked up because I feel like it's such a symptom of people who I know what their jobs are like based on that question, and it's not as good as it could be. 
Their jobs are much sadder and more confusing than they could be if they had a slightly different approach to telemetry. That's the observability bit. But about SRE: very few ops engineers start companies, it seems. When I did, you know, I was one of three founding members. And the first thing I did was, of course, spin up an infrastructure and set up CI/CD and all this stuff. And I'm, like, feeling less useful than the others but, you know, doing my part. But that stuff that I spun up, we didn't have to hire an SRE for years, and when we did, it was pretty optional. And this is a system, you know, things trickle down, right? Doing things right from the beginning and having them be clear, well-understood, and efficient, we were able to do so much with so few people. You know, we were landing, you know, hundred-thousand-dollar deals with people who thought we had hundreds of employees. We had 12 engineers for the first almost five years, just 12 engineers. But, like, almost all of the energy that they put into the world went into moving the business forward, not fighting with the system, or thrashing the system, or trying to figure out bugs, or trying to track down things that were just, like, impossible to figure out. We waste so much time as engineers by trying to add this stuff in later. So the actual answer to your question, like, if you aren't lucky enough to have an ops co-founder, is: as soon as you have real users. You know, I've made a career out of basically being the first engineer to join from infrastructure when the software engineers are starting to have real customers. Like, at Parse, they brought me in when they were about to do their alpha release. And they're like, whoa, okay, I guess we better have someone who knows how to run things. And I came in, and I spent the next, you know, year or so just cleaning up shit that they had done, which wasn't terrible. But, you know, they just didn't really know what they were doing. So I kind of had to undo everything, redo it. And just the earlier, the better, right? It will pay off. Now, that said, there is a real risk of over-engineering early. Companies don't fail because they innovated too quickly; let's put it that way. They fail because they couldn't focus. They couldn't connect with their customers. They couldn't do all these things. And so you really do want to do just enough to get you to the next place so that you can put most of your effort into making product for your customers. But yeah, it's so much easier to set yourself up with auto-deployment so that every CI/CD run automatically deploys your code to production and just maintain as you grow. That is so easy compared to trying to take, you know, a long, slow, you know, leaky deploy process and turn it into one that could auto-deploy safely after every commit. So yeah, do it early, and then maintain. It is the easiest way in the world to do this stuff. Mid-Roll Ad: As life moves online, bricks-and-mortar businesses are having to adapt to survive. With over 18 years of experience building reliable web products and services, thoughtbot is the technology partner you can trust. We provide the technical expertise to enable your business to adapt and thrive in a changing environment. We start by understanding what's important to your customers to help you transition to intuitive digital services your customers will trust.
We take the time to understand what makes your business great and work fast yet thoroughly to build, test, and validate ideas, helping you discover new customers. Take your business online with design‑driven digital acceleration. Find out more at tbot.io/acceleration or click the link in the show notes for this episode. WILL: Correct me if I'm wrong, I think you said Facebook and mobile. Do you have, not experience with mobile but do you...does Honeycomb do anything in the mobile space? Because I feel like that portion is probably the most complicated for mobile, like, dealing with iOS and Android and everything that they're asking for. So... CHARITY: We don't have mobile stuff at Honeycomb. Parse was a mobile Backend as a Service. So I went straight from doing all mobile all the time to doing no mobile at all. I also went from doing databases all the time to doing, you know...it's good career advice typically to find a niche and then stay in it, and I have not followed that advice. [laughter] I've just jumped from...as soon as I'm good at something, I start doing something else. WILL: Let me ask you this, how come you don't see more mobile SRE or help in that area? CHARITY: I think that you see lots of SREs for mobile apps, but they're on the back-end side. They're on the server side. So it's just not as visible. But even if you've got, like, a stack that's entirely serverless, you still need SRE. But I think that the model is really shifting. You know, it used to be you hired an SRE team or an ops team to carry the pager for you and to take the alerts and to, like, buffer everything, and nowadays, that's not the expectation. That's not what good companies do. You know, they set up systems for their software engineers to own their code in production. But they need help because they're not experts in this, and that's where SRE types come in. Is that your experience? WILL: Yeah, for the most part. Yeah, that is. CHARITY: Yeah, I think that's very healthy. VICTORIA: And I agree with that as well. And I'm going to take that clip of your reaction to that question about when you should start doing [laughter] observability and just play for everybody whenever someone asks [laughs] me that. I'm like, here's the answer. That's great. CHARITY: I think a good metaphor for that is like, if you're buying a house and taking out a loan, the more of a down payment you can put down upfront, the lower that your monthly payments are going to be for the rest of your...you amortize that out over the next 20-30 years. The more you can do that, the better your life is going to be because interest rates are a bitch. VICTORIA: It makes sense. And yeah, like, to your point earlier about when people actually do start to care about it is usually after something has broken in a traumatic way that can be really bad for your clients and, like, your legal [laughter] stance -- CHARITY: That's true. VICTORIA: As a company. CHARITY: Facing stuff, yeah, is where people usually start to think about it. But, like, the less visible part, and I think almost the more important part is what it does to your velocity and your ability to execute internally. When you have a good, clean system that is well-tended that, you know, where the amount of time between when you're writing the code and when the code is in production, and you're looking at...when that is short and tight, like, no more than a couple of hours, like, it's a different job than if it takes you, like, days or weeks to deploy. 
Your changes get batched up with other people's. And, you know, like, you enter, like, the software development death spiral where, you know, it takes a while. So your diffs get even bigger, so code review takes even longer, so it takes even longer. And then your changes are all getting batched up. And, you know, now you need a team to run deploys and releases. And now you need an SRE team to do the firefighting. And, like, your systems are...the bigger it gets, the more complicated it gets, the more you're spending time just waiting on each other or switching contexts. You ever, like, seen an app and been like, oh, that's a cool app? I wonder...they have 800 engineers at that company. And you're just like, what the hell are they all doing? Like, seriously, how does it take that many engineers to build this admittedly nice little product? I guarantee you it's because their internal hygiene is just terrible. It takes them too long to deploy things. They've forgotten what they've written by the time it's out, so nobody ever goes and looks at it. So it's just like, it's becoming a hairball under your bed. Nobody's looking at it. It's becoming more and more mysterious to you. Like, I have a rule of thumb (there's no mathematical science behind this, just experience). It's a rule of thumb that says that if it takes you, you know, on the order of, say, a couple of hours tops to deploy your software, it takes you a certain number of engineers to build and own that product. Well, if your deploys take on the order of days instead of hours, it will take you twice as many people [chuckles] to build and support that product. And if it takes you weeks to deploy that product, it will take you twice as many again; if anything, that is an underestimate because it actually goes up exponentially, not linearly. But, like, we are so wasteful when it comes to people's time. It is so much easier for managers to go, uh, we're overloaded. Let's hire more people. For some reason, you can always get headcount when you can't actually get the discipline to say no to things or the people to work on internal tools to, like, shrink that gap between when you've written it and when it's live. And just the waste, it just spirals out of control, man, and it's not good. And, you know, it should be such a fun, creative, fulfilling job where you spend your day solving puzzles for money and moving the business materially forward every day. And instead, how much of our time do we just sit here, like, twiddling our thumbs and waiting for the build to finish or waiting for code review [laughs] to get turned around? Or, you know, swapping projects and, like, trying to page all that context in your brain? Like, it's absurd, and this is not that hard of a problem to fix. VICTORIA: Engineering should be fun, and it should be dangerous. That's what [laughs] I'm getting out of this -- CHARITY: It should be fun, and it should be dangerous. I love that. VICTORIA: Fun and dangerous. I like it. [laughs] And speaking of danger, I mean, maybe it's not dangerous, but what does success really look like for you at Honeycomb in the next six months or even in the next five years? CHARITY: I find it much easier to answer what failure would look like. VICTORIA: You can answer that too if you like. [laughs] CHARITY: [laughs] What would success look like? I mean, obviously, I have no desire to ever go through another acquisition, and I don't want to go out of business.
So it'd be nice not to do either of those things, which means since we've taken VC money, IPO would be nice eventually. But, like, ultimately, like, what motivates Christine and me and our entire company really is just, you know, we're engineers. We've felt this pain. We have seen that the world can be better. [laughs] We really just want to help, you know, move engineering into the current decade. I feel like there are so many teams out there who hear me talk about this stuff. And they listen wistfully, and they're like, yeah, and they roll their eyes. They're like, yeah, you work in Silicon Valley, or yeah, but you work at a startup, or yeah...they have all these reasons why they don't get nice things. We're just not good enough engineers is the one that breaks my heart the most because it's not true. Like, it has nothing to do...it has almost nothing to do with how good of an engineer you are. You have to be so much better of an engineer to deal with a giant hairball than with software that gets deployed, you know, within the hour that you can just go look at and see if it's working or not. I want this to go mainstream. I want people...I want engineers to just have a better time at work. And I want people to succeed at what they're doing. And just...the more we can bring that kind of change to more and more people, the more successful I will feel. VICTORIA: I really like that. And I think it's great. And it also makes me think I find that people who work in the DevOps space have a certain type of mentality sometimes, [laughs] like, it's about the greater community and, like, just making being at work better. And I also think it maybe makes you more willing to admit your failures [laughs] like you were earlier, right? CHARITY: Probably. VICTORIA: That's part of the culture. It's like, well, we messed up. [laughs] We broke stuff, and we're going to learn from it. CHARITY: It's healthy. I'm trying to institute a rule where at all hands when we're doing different organizations giving an update every two weeks, where we talk two-thirds about our successes and things that worked great and one-third about things that just didn't work. Like, I think we could all stand to talk about our failures a lot more. VICTORIA: Yeah, makes it a lot less scary, I think [laughs], right? CHARITY: Yeah, yeah. It democratizes the feeling, and it genuinely...it makes me happy. It's like, that didn't work, great. Now we know not to do it. Of the infinite number of things that we could try, now we know something for real. I think it's exciting. And, I don't know, I think it's funny when things fail. And I think that if we can just laugh about it together... You know, in every engineering org that I've ever worked at, out of all the teams, the ops types teams have always been the ones that are the most tightly bonded. They have this real, like, Band of Brothers type of sentiment. And I think it's because, you know, we've historically endured most of the pain. [laughs] But, like, that sense of, like, it's us against the system, that there is hilarity in failure. And, at the end of the day, we're all just monkeys, like, poking at electrical sockets is, I think...I think it's healthy. [laughs] WILL: That's really neat. I love it. This is one of my favorite questions. What advice would you give yourself if you could go back in time? CHARITY: I don't know. I think I'd just give myself a thumbs up and go; it's going to be all right. I don't know; I wouldn't... 
I don't think that I would try to alter the time continuum [laughs] in any way. But I had a lot of anxiety when I was younger about going to hell and all this stuff. And so I think...but anything my future self said to me, I wouldn't have believed anyway. So yeah, I respectfully decline the offer. VICTORIA: That's fair. I mean, I think about that a lot too actually, like, I sometimes think like, well, if I could go back to myself a year ago and just -- CHARITY: Yeah. I would look at me like I was stupid. [laughter] VICTORIA: That makes sense. It reminds me a little bit about what you said, though, like, doing SRE and everything upfront or the observability pieces and building it correctly in a way you can deploy fast is like a gift to your future self. [laughs] CHARITY: Yes, it is, with a bow. Yes, exactly. VICTORIA: There you go. Well, all right. I think we are about ready to wrap up. Is there anything you would like to promote specifically? CHARITY: We just launched this really cool little thing at Honeycomb. And you won't often hear me say the words cool and AI in proximity to each other, but we just launched this really dope little thing. It's a tool for using natural language to ask questions of your telemetry. So, if you just deployed something and you want to know, like, what's slow or did anything change, you can just ask it using English, and it does a ChatGPT thing and generates the right graphs for you. It's pretty sweet. VICTORIA: That's really cool. So, if you have Honeycomb set up and working in your system, then you can just ask the little chatbot, "Hey, what's going on here?" CHARITY: Yeah. What's the slowest endpoint? And it'll just tell you, which is great because I feel like I do not think graphically at all. My brain just really doesn't. So I have never been the person who's, like, creating dashboards or graphs. My friend Ben Hartshorne works with me, and he'll make the dashboards. And then I get up in the morning, and I bookmark them. And so we're sort of symbiotic. But everyone can tweak a query, right? Once you have something that you know is, like, within spitting distance of the data you want, anyone can tweak it, but composing is really hard. So I feel like this really helps you get over that initial hurdle of, like, er, what do I break down by? What do I group by? What are the field names? You just ask it the question, and then you've got to click, click, click, and, like, get exactly what you want out of it. I think it's, like, a game changer. VICTORIA: That sounds extremely cool. And we will certainly link to it in our show notes today. Thank you so much for being with us and spending the time, Charity. CHARITY: Yeah, this was really fun. VICTORIA: You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @victori_ousg. WILL: And you could find me on Twitter @will23larry. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com. Special Guest: Charity Majors.
Joël's new work project involves tricky date formats. Stephanie has been working with former Bike Shed host Steph Viccari and loved her peer review feedback. The concept of truthiness is tough to grasp sometimes, and JavaScript and Ruby differ in their implementation of truthiness. Is this a problem? Do you prefer one model over the other? What can we learn about these design decisions? How can we avoid common pitfalls? [EDI](https://www.stedi.com/blog/date-and-time-in-edi) [Booleans don't exist in Ruby](https://thoughtbot.com/blog/what-is-a-boolean) [Rails valid? method](https://api.rubyonrails.org/classes/ActiveRecord/Validations.html#method-i-valid-3F) [Parse, don't validate](https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) [JavaScript falsiness rules](https://www.sitepoint.com/javascript-truthy-falsy/) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: So I'm on a new project at work. And I'm doing some really interesting work where I'm connecting to a remote third-party database directly and pulling data from that database into our system, so not via some kind of API. And one thing that's been really kind of tricky to work with are the date formats on this third-party database. STEPHANIE: Is the date being stored in an unexpected format or something like that? JOËL: Yes. So there's a few things that are weird with it. So this is a value that represents a point in time, and it's not stored as a date-time value. Instead, it's stored separately as a date column and a time column. So a little bit of weirdness there. We can work with it, except that the time column isn't actually a time value. It is an integer. STEPHANIE: Oh no. JOËL: Yeah. And if you're thinking, oh, okay, an integer, it's going to be milliseconds since midnight or something like that, which is basically how Postgres' time of day works under the hood, nope, that's not how it works. It's a positional digit thing. So, if you've got the number, you know, 1040, that means 10:40 a.m. STEPHANIE: Oh my gosh. Is this in military time or something like that, at least? JOËL: Yes, it is military time. But it does allow for all these, like, weird invalid values to creep in. Because, in theory, you should never go beyond 2359. But even within the hours that are allowed, let's say, between 1000 and 1100, so between 10:00 and 11:00 a.m., a clock only goes up to 59 minutes. But our base 10 number system goes up to 99, so it's possible to have 1099, which is just an invalid time. STEPHANIE: Right. And I imagine this isn't validated or anything like that. So it is possible to store some impossible time value in this database. JOËL: I don't know for sure if the data is validated or not, but I'm not going to trust that it is. So I have to validate it on my end. STEPHANIE: That's fair. One thing that is striking me is what time is zero? JOËL: So zero in military time or just 24-hour clocks in general is midnight. So 0000, 4 zeros, is midnight.
What gets interesting, though, is that because it's an integer, if you put the number, you know, 0001 into the database, it's just going to store it as 1. So I can't even say, oh, the first two digits are the hours, and the second two digits are the minutes. And I'm actually dealing with, I think, seconds and then some fractional part of seconds afterwards. But I can't say that because the number of digits I have is going to be inconsistent. So, first, I need to zero pad. Well, I have to, like, turn it into a string, zero pad the numbers so it's eight characters long. And then, start slicing out pairs of numbers, converting them back into integers, validating them within a range of either 0 to 23 or 0 to 59, and then reconstructing a time object out of that. STEPHANIE: That sounds quite painful. JOËL: It's a journey for sure. STEPHANIE: Do you have any idea why this is the case or why it was created like this originally? JOËL: I'm not sure. I have a couple of theories. I've seen this kind of thing happen before. And I think it's a common way for developers who maybe haven't put a lot of thought into how time works to just sort of think, oh, the human representation. I need something to go in the database. On my digital clock, I have four digits, so why not put four digits in the database? Simple enough. And then they don't always realize that there's all these edge cases to think about and that human representations aren't always the best way to store data. STEPHANIE: I like how you just said that, you know, we as humans have developed systems that are not quite, you know, the same as how a computer would. But what was interesting to me...something you said earlier about time being a fixed point. And that is different from time being a value, right? And so here in this situation, it sounds like we're storing time as a value, but really, it's more of the idea of, like, a point. JOËL: Interesting. What is the difference for you between a point and a value? STEPHANIE: I suppose a value to me...And I think we talked about this a little bit on a previous episode about value objects and also how we stored numbers, like phone numbers and credit card numbers and stuff like that. But a value, like, I might want to do math on. But I don't really want to do math on time. Or, specifically, if I have this idea of a specific point in time, like, that is fixed and not something that I could mutate and expect it to be the same thing that I was trying to express the first time around. JOËL: Oh, that's interesting because I think when it comes to time and specifically points in time, I sometimes do want to do math on them. And so, specifically, I might want to say, what is the time that has elapsed between two points in time? Maybe I have a start time and an end time, and I want to say how much distance is there between the two? If you use this time system where you're storing it as an integer number where the digits have positional values, because there's all those gaps between, you know, 59 and 99 that are not valid, math breaks down. You've broken math by storing it that way. So you can't get an accurate difference by doing math on that, as opposed to if you store it as a counter, which is what databases do under the hood, but you could do it manually. If you just wanted to use an integer column, then you can do math because it's just a number of seconds since the beginning of the day. And you can subtract those from each other. And now you have the number of seconds between the two of them.
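For the curious, that whole dance might look something like this in Ruby. This is a rough sketch, not Joël's actual client code; the method name is invented, and it assumes the column holds up to eight positional digits (HHMMSSff, where ff is a fractional part of a second):

```ruby
# Parse a "positional digits" integer time into seconds since midnight,
# e.g. 10:40:00 a.m. stored as the integer 10400000 (HHMMSSff).
def parse_positional_time(raw)
  padded = raw.to_i.to_s.rjust(8, "0")            # 1 -> "00000001"
  hh, mm, ss, ff = padded.scan(/../).map(&:to_i)  # slice out pairs of digits

  raise ArgumentError, "invalid hour: #{hh}"   unless (0..23).cover?(hh)
  raise ArgumentError, "invalid minute: #{mm}" unless (0..59).cover?(mm)
  raise ArgumentError, "invalid second: #{ss}" unless (0..59).cover?(ss)

  # Collapse to a real counter (seconds since midnight) so that
  # subtracting two values gives a meaningful elapsed time again.
  (hh * 3600) + (mm * 60) + ss + (ff / 100.0)
end

parse_positional_time(10400000)   # => 38400.0, i.e. 10:40 a.m.
# parse_positional_time(10990000) # would raise: 10:99 is not a real time
```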
And if you want them in minutes or hours, you divide by 60 or 3600, and you get the correct response. STEPHANIE: Yeah, that is really interesting because [chuckles] in this situation, you have the worst of both worlds, it seems like. [laughs] JOËL: The one potential benefit is, I think, it's maybe more human-readable. Although, at that point, I would say if you're not doing math on it and you want something human-readable, you probably don't want an integer. You probably want a string. And maybe you even store it as, like, an ISO 8601 time string in the database, or even just hour:minute:second split by colons or whatever it is but just as a string. Now it's human-readable. You can still sort by it if you go from largest to smallest increment in your format. You can't do math, but then you weren't doing math on it anyway. So that's probably a nice compromise solution. But, ideally, you'd use a native, you know, time of day column or a date-time or something like that. STEPHANIE: For sure. Well, it sounds like something fun to contend with. [laughs] JOËL: One thing that was brought to my attention that I'd never heard about before is that potentially a reason it's stored that way is because of an old data format called EDI—I think it's Electronic Data Interchange—that dates from ages ago, you know, the '60s or '70s, something like that. Before we had a lot of standards for data, this was an emerging standard for moving data between systems. And it has a lot of, like, weird things with the way it's set up. But if you're dealing with any sort of older data warehouses or older business systems, they will often exchange data in this format. And sometimes, you're going to store data in something that approximates this older EDI format. And, apparently, it has some weirdness around dates where it kind of does something like this. So someone was suggesting, oh, well, if you're interacting with maybe an older, you know, a lot of, like, e-commerce platforms or banking systems, probably airline systems, the kind of things you'd expect to be written in, let's say, COBOL... STEPHANIE: [laughs] JOËL: Have a system that's kind of like this. So maybe that wouldn't be quite as surprising. STEPHANIE: Yeah, that is really interesting. It just sounds like sometimes you're limited by the technology that you're interacting with. And I guess the one plus side is that, in your system, you can make the EDI work for you, hopefully. [laughs] Whereas perhaps if you are talking to some of those older technologies that don't know how else to convert date types and things like that, like, you just kind of have to work with what's available to you. JOËL: Yeah. And that's got me realizing that a lot of these older, archaic systems are still online and very much a part of our software ecosystem and that there's a lot of value in learning some software history so that I'm able to recognize them and sort of work constructively with them when I have to interact with that kind of system. STEPHANIE: Yeah, I really like that mindset. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, last week, we talked about writing reviews for ourselves and our peers. And one thing that happened in between the last episode and this one is Steph Viccari, former co-host of this podcast, who I've been working with really closely on this project of mine; she was writing a peer review for me.
And one thing that she did that I really loved was she sent me a message and asked me a few questions about the direction of the review that I was wanting and what kind of feedback would be helpful for me. And some of the things she asked were, you know, "Is there a skill that you're actively working on? Is there a skill you'd like to start working on?" And, like, what my goals are for the feedback. Like, how can she tailor this feedback to things that would help my progression and what I hope to achieve? And then my favorite question that she asked was, "What else should I know but didn't think to ask?" And I thought that was a really cool way of approaching. You know, she's coming to this, like, wanting to be helpful, but then even still, like, there are things that she knows that I am kind of the expert on in my own career progression, and I really liked that. I think I'd mentioned last week that part of the feedback you want to be giving is, you know, something that will be helpful for that person, and centering them in it, instead of you is just a really awesome way to do that. So I was very appreciative that she asked me those questions. JOËL: That's incredibly thoughtful. I really appreciate that she sent that out to you. What did you respond for the is there something else I should know but didn't know to ask? STEPHANIE: Yeah. I mentioned that more and more, I'm realizing that I am not interested in management. And so what would be really helpful for me was to ground most of the feedback in terms of my, like, technical contributions. And also, that one thing that I'm thinking about a lot is how to be an individual contributor and still have an impact on team health and culture because that is something I care about. And so I wanted to share that with her because if there are things that she can identify in those aspects, that would be really awesome for me. And that can kind of help guide her away from a path that I'm not interested in. JOËL: I think having that kind of self-awareness is really powerful for yourself. But then, when you can leverage that to get better reviews that will help you get even further down the path that you're hoping to go, and, wow, isn't that just, like, a virtuous cycle right there that's just building on itself? STEPHANIE: Yeah, for sure. I think the other thing I wanted to share about what's new in my world that has been just a real boost to my mood is how long the days are right now because it's summer in North America. And yesterday was the summer solstice, and so we had the longest day of the year. The sun didn't set until 8:30 p.m. And I just took the opportunity to be outside. I took a swim in the lake, which was my first swim of the season, which was really special. And my friend had just a nice, little, like, backyard campfire hang out. And we got to roast some marshmallows and just be outside till the sunset. And that was really nice. JOËL: When you say the lake, is that Lake Michigan? STEPHANIE: Yes, I do mean Lake Michigan. [laughs] I forget that some people just don't have a giant lake next to them [laughs] that they refer to as the lake. JOËL: It's practically an inland sea. STEPHANIE: Yes, you can't see the other side of it. So, to me, it kind of feels like an ocean. And yesterday, when I was in the water, I also was thinking that I felt like I was just in a giant bathtub. [chuckles] JOËL: So I'm in New England, and most of the bodies of water here are not called lakes. They're called ponds. STEPHANIE: Really? JOËL: No matter the size. 
STEPHANIE: Oh. JOËL: I guess lakes are reserved for things like what you have that are absolutely massive, and everything else is a pond. STEPHANIE: That's so funny because I think of ponds as much smaller in scale, like a quaint, little pond. But that's a really fun piece of regional vocabulary. So one interesting thing happened on my client project this week that I wanted to get your input on because I've definitely seen this problem before, and still, it continues to crop up. But I was working on a background job that we were passing a Boolean value in as one of the parameters that we would then, you know, use down the line in determining some logic. And we, you know, made this change, and then we were surprised to find out that it continued to not work the way we expected. So we got some bug reports that we weren't getting into one of the branches of the conditional based on that Boolean value that we were passing in. And we learned, after a little bit of digging, that it turns out that those values are serialized because this job is actually saved in -- JOËL: Oh no. STEPHANIE: [chuckles] Yeah. It inherits from ActiveRecord, actually, and is saved in our database. And so, in that process, the Boolean value got serialized into a string and then did not get converted [chuckles] back into a Boolean. And so when we did that if-variable check, it was always evaluating to true because strings are truthy in Ruby. JOËL: Right. The string false is still truthy. STEPHANIE: A string false is still truthy. And we ended up having to coerce it into a Boolean value to fix our little bug. But it was just one of those things that was really frustrating, you know when you feel really confident that you know what you're doing. You're just writing a conditional statement. And it turns out the language beguiled you. [laughs] JOËL: I've run into similar bugs when I'm reading from environment variables because environment variables are always strings. But it's common that you'll be setting some kind of flag. So when you're setting the environment variable, you're setting something to true or false. But then, when you're reading it, you have to explicitly check if this environment variable double equals the string true, then do the thing. Because if you just check for the value, it will never be false. STEPHANIE: Right. And I kind of hate seeing code like that. I don't know; something about it just rubs me the wrong way because it just seems so strange, I suppose. JOËL: Is it just, like, those edge cases where you specifically have to do some kind of, like, double equals check on a value that feels like it should be a Boolean? Or do you kind of feel a bit weird about the concept of truthiness in general? STEPHANIE: I think the concept of truthiness is very hard to grasp sometimes. And, you know, when you're talking about that edge case where we are setting...we're checking if the string is the string true. That means that everything else is false, right? So, in some ways, I think it's just really confusing because we've expanded the definition of what true and false mean to be anything. JOËL: That's really interesting because now you have to pick. Are you checking against the string true, or are you negatively checking against the string false? And those are not equivalent because, like you said, now you're excluding every other string. So, does the string "Hello, World" put you in the false branch or the true branch? STEPHANIE: Who's to say?
[laughs] I think a similar conundrum also occurs when we use predicate matchers in our tests. I think this is a gripe that I've talked about a little bit with others when we're writing tests and especially if we're writing a predicate method, and then that's what we're testing, right? We kind of are expecting a true or false value. And when our test expects something to be truthy rather than explicitly saying that we expect the return value to be true, that is sometimes a bit confusing to me as well because someone could theoretically change this method and just have it return "Hello, World," like you said, as a string, like, anything else. And that would still pass the test. JOËL: And it might even pass your code in most places. STEPHANIE: Right. And I suppose that's okay. Is it okay? I don't know. I'm not sure where I land on this. JOËL: I used to be a kind of hardcore Boolean person. STEPHANIE: [laughs] That's a sentence no one has ever [laughs] said. JOËL: I like my explicit trues and falses. I don't like the ambiguity of saying, like, oh, if person do a thing, it's, like, oh, what is person here? Is this a nil check? Is it explicitly false? Do you just want to know that this person is non-empty? Well, what exactly are you checking? So I like the explicitness of saying, oh, if person dot present, or if person dot empty, or if person dot nil. And I think maybe spending some time in some more strongly typed languages has also kind of pushed me a little bit in that direction, where it's nice to have something that is explicitly either just true or just false. And then you completely eliminate that problem of, like, oh, but what if it's neither true nor false, then what do we do for that branch there? And the answer is your compiler will reject that program or say, "You've written a bad program." And you never reach that point where there's a bug. I've slowly been softening my stance. A fellow thoughtbot colleague has written an article why there is no such thing as a Boolean in Ruby. Everything is just shades of gray and truthiness and falsiness. But from the perspective of a program, there is no such thing as a Boolean. And that really opened my eyes to a different perspective. I don't know that I fully agree, but I'm kind of begrudgingly acknowledging that Mike makes a good point. STEPHANIE: Yes, I read the blog post that he wrote about this exact problem. And I think it's called "Booleans Don't Exist in Ruby." And I think I similarly, like, came away with, like, yeah, I think I get it if I just suspend my disbelief, you know, hard enough. [laughs] But what you were saying about, like, liking the explicitness, right? And liking the lack of ambiguity, right? Because when you start to believe that Booleans don't exist, I think that really messes with your [laughs] head a little bit. And one takeaway that I got from that blog post, kind of like we mentioned earlier, is that there is such thing as false, and then everything else is true. And I guess that's kind of how Ruby operates. JOËL: Sort of, because then you have the problem of nil, which is also falsy. STEPHANIE: That's true, but nil is nothing. [laughs] JOËL: That's one of the classic problems as well when you're trying to do a nil check, or maybe some memoization, or maybe even, say, cache this value, or store this value, or initialize this value if it's not set. 
And assuming that nil is falsy, you'll do some kind of, like, or-equals, or just some kind of expression with an or in it thinking, oh, do this extra work if it's nil because then it will trigger the branch. But that all breaks down if there's potential for your value to be false because false and nil get treated the same in conditional code. STEPHANIE: Right. I think this could be a whole separate conversation about nil and the idea of nothingness. But I do think that, as Ruby developers, at least in the Ruby world, based on what I've seen, we lean on nil in ways that we maybe shouldn't. And we end up having to be very defensive about this idea of nil being falsy. But that's because we aren't necessarily thinking as hard about our return values and what our arguments are, and that ends up causing problems in evaluating truthiness when we're having to check those objects that could be nil. JOËL: In terms of the way we communicate with the readers of our code: as a reader, I generally assume that a Ruby method that ends with a question mark will return a true Boolean, either true or false. Is that generally your expectation as well? STEPHANIE: I want to say yes, but I've clearly experienced enough times where that's not the case that, you know, it's like, my ideal world and then reality [laughs] and having to figure out how to hold both of those things. JOËL: It's one of those things that's mostly true. STEPHANIE: I want to believe it because predicate methods and, like, the Ruby Standard Library mostly return Boolean values, at least to my knowledge. And if we all kind of followed that [laughs] pattern, then it would be clear. But I think there's a part of me that these days mostly believes it to be true that I will be getting a Boolean value (And, wow, even as I say this, I realize how confusing [laughs] this is starting to sound.), at least until I'm not, right? Until I'm surprised at some point. JOËL: I think there's two things I expect of predicate methods in Ruby. One is that they will return, like, a hard Boolean, either true or false. The second is that they are purely query methods; they don't do side effects. Neither of those are consistent across the ecosystem. And a classic example of violating that second guideline I have in my mind is the valid question mark method from Rails. And this really surprised me the first time I tripped into this because when you call that on an object, it doesn't just tell you whether or not the object is valid. It actually mutates the underlying object by populating the error messages hash. So if you have an invalid object and you examine its error messages hash, it will be empty until you call the valid question mark method. So sometimes, you don't even care about the return value. You're just calling valid to mutate the object so that you can access the underlying hash, which is...that's weird code, when you call a predicate method but then totally ignore the output. STEPHANIE: Yeah, that is strange because I have definitely seen it where we are calling the valid method to validate, and then we end up using the error messages that are set on that object later. I think that's tough because, in some ways, you do care about whether the object is valid or not. But then also, the error messages are usually helpful when you're trying to use that method. The point is to validate it so that you can hopefully, like, tell the user or, like, the consumer of your system, like, what's wrong in validation.
But it is almost, like, two separate things. JOËL: It is. And sometimes, it's really hard to split those two apart. So I'm not throwing shade at the Rails dev team here. Some of these design decisions are legitimately difficult to make. And what's most useful for the most people the most time is often a compromise. I think you brought up the idea of separating those two things. And I think there's a general principle here called command-query separation. That's, like, the fancy way of talking about what you were saying. STEPHANIE: One thing that I was just thinking about kind of when we initially kicked off this conversation was the idea of how things outside the Ruby ecosystem or the Ruby world interact with what we're returning in terms of Boolean values. And so when I mentioned the object being serialized because of, you know, our database and, like, background job system, that's an entity that's figuring out what to do with the things that we are returning from Ruby. And similarly, when you're talking about environment variables, it's, like, our computer system talking to our language, and those things being a bit different. Because when we, like, suspend our disbelief about what is truthy or falsy in Ruby, at least we're doing it in, like, the world of Ruby. And as soon as we have to interact with something else, like, maybe that's when things can get a little hairy because there's different ideas about truthiness there. And so I'm kind of also thinking about what we return in APIs and maybe, like, that being an area where some explicitness is more required. JOËL: Whenever I'm consuming third-party data, I'm a big fan of having some kind of transformation or parse step. This is inspired in part by the "Parse, Don't Validate" article, which I'll link in the show notes. So, if I'm reading data from a third-party API and I want it to be a Boolean, then maybe I should do the transformation myself. So maybe I check literally, is it the string true or the string false, and anything else gets rejected? Maybe I have...and maybe I'm a little bit more permissive, where I also accept capital T or capital F, and I have, like, some rules for transforming that. But the important thing is I have an explicit conversion step and reject any bad output. And so for something like an environment variable, maybe that would look like looking for true or false and raising if anything else is there. So then we try to boot the app, and it immediately crashes because, hey, we've got some, like, undefined, like, bad configuration that we're trying to load the app with. Don't even try to keep running. Hard crash immediately. Fix it, and then come back. STEPHANIE: Yeah, I like that a lot because the way we ended up fixing this issue with the background job that I mentioned was just coercing our string value into a Ruby Boolean in the job that we were then, like, running the conditional in. But really, what we should have done is fix that at a higher level, where we parse and deserialize, like, the values we're getting from the job, to prevent this kind of thing in the future because right now, someone can do this again, and that's a real bummer. JOËL: I always love those deeper conversations that happen after you've had a bug that are like, how do we prevent this from happening again? Because sometimes that's where you have the deepest learnings or the most interesting insights or, you know, ideas for Bike Shed episodes.
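That boot-time parse step might look something like this in Ruby. A minimal sketch with invented names and one reasonable choice of strictness rules, not any particular library's API:

```ruby
# Convert a config string to a real Boolean at boot, or refuse to boot.
def parse_env_bool!(name)
  raw = ENV.fetch(name) # raises KeyError if the variable isn't set at all

  case raw.strip.downcase
  when "true", "t"  then true
  when "false", "f" then false
  else
    # Hard crash immediately: bad configuration should never reach runtime.
    raise ArgumentError, "#{name} must be true or false, got #{raw.inspect}"
  end
end

# Parsed once at the boundary; everything downstream sees a real Boolean
# and never has to compare against the string "true" again.
FEATURE_ENABLED = parse_env_bool!("FEATURE_ENABLED")
```

The same idea applies to the background job Stephanie mentions: deserialize the parameter back into true or false at the edge, and the conditional deep inside the job can go back to trusting its input.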
I'm really curious to contrast JavaScript's approach to truthiness with Ruby's because even though they both use the same idea, they kind of go about it differently. STEPHANIE: Tell me more. JOËL: So, in Ruby, an empty array and an empty string are truthy. JavaScript decided that empty things are falsy. And I forget...there's a whole table that shows the things that are truthy and falsy in JavaScript. I want to say zero is falsy in JavaScript but don't quote me on that, which can also lead to some interesting edge cases you have to think about. STEPHANIE: Okay, yes. This is coming back to me now. I think depending on what, you know, ecosystem or language or world I'm in, I have to just only be able to think about what is true in this world [laughs] and then do that context switching when I am working in something else. But yeah, that is a really interesting idea. Someone decided [laughs] that this was their idea of true or false. JOËL: I'm curious if you have a preference for JavaScript's approach to falsiness, where a lot more types of values are falsy, versus Ruby, which says pretty much only nil and false are falsy. Everything else is truthy. STEPHANIE: Hmm, that is an interesting question. JOËL: Because in Ruby then or, I guess, in Rails, we end up with the present? predicate method that is specifically checking for not only nil and false but also for empty array, empty string, those kinds of things. So, if you find yourself writing a lot of present? checks in your code, you're kind of leaning on something that's closer to JavaScript's definition of falsiness than Ruby's. But maybe you're making it more explicit. STEPHANIE: Right. In JavaScript, I see a lot of double bangs in lieu of those predicate methods. But I suppose by nature of having to write those predicate methods in Ruby, we're, like, really wanting something else, I think. And maybe...I guess it is just a question of explicitness like you're saying, and which I prefer. Is it that I need to be explicit to convey the idea that I want, or is it nice that the language has just been encoded that way for me? JOËL: Or maybe when you write conditionals, if you find yourself doing a lot of presence checks, do you find that you typically are trying to branch on if not nil, not false, not empty more frequently than just if not nil, not false? Because that's kind of the difference between Ruby's model and JavaScript's model. STEPHANIE: Hmm, the way you posed that question is interesting because it makes me think that sometimes it's quite defensive because we have to check for all these possible return values. We are unsure of what we are getting back. And so this is kind of, like, a catch-all for things that we aren't really sure about. JOËL: Yeah, I mean, that's the fun of dynamic programming languages. You never know exactly what you're going to get as long as things respond to certain methods. You really lean into the duck typing. And I think that's Mike's argument in his article "Booleans Don't Exist": as long as something is responding to the methods that you care about, it doesn't matter if you're dealing with a true Boolean or some kind of other value. STEPHANIE: Right. So I suppose the ideas of truthiness then are a little bit more dependent on how people are using the language, though it seems like a chicken-and-egg situation to me. [laughs]
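For reference, here's the contrast being discussed, as a small sketch assuming ActiveSupport is available for blank?/present?:

```ruby
# Plain Ruby: only nil and false are falsy; empty values are truthy.
"" ? "truthy" : "falsy" # => "truthy"
[] ? "truthy" : "falsy" # => "truthy"
0  ? "truthy" : "falsy" # => "truthy"

# Rails' present? lands much closer to JavaScript's notion of falsiness:
require "active_support/core_ext/object/blank"
"".present?  # => false
[].present?  # => false
nil.present? # => false
0.present?   # => true -- unlike JavaScript, where 0 is falsy
```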
JOËL: It is really interesting to me in terms of maybe thinking about use cases in my own code if I'm writing code that leans on truthiness where I can say just, you know, if user. But then knowing that, oh, that doesn't account for, like, an empty value. Do I then also need to add an extra check for emptiness? And maybe if I'm in a Rails project, I would reach for that present? matcher, where I wouldn't have to do that in JavaScript because I can just say, if user, and that already automatically checks for presence. So I'm kind of wondering now in my mind, like, which default would fit my use cases more? Or, if I go back to an older version of myself, I will say I don't want any of these defaults. They're all too ambiguous. I'm going to explicitly put if user dot nil question mark, if that's the thing that I'm checking for, or if user dot empty question mark, because I want my reader to know what condition I'm checking. STEPHANIE: Yeah, that is interesting, this idea of, like, which mode do you find yourself needing to use more and whether that is accommodated for you because it's the more common, like, use case or problem. I think that's something that I will be thinking about the next time I write a conditional [laughs] because, like I was saying earlier, I think I end up just leaning on what someone else has decided for me in terms of truthiness and not so much how I would like it to work for me. JOËL: And sometimes we don't want to fight the language too much. You know, if I'm writing Elm, everything is hard Booleans. And I know I'm never going to get a nil in a place where I'd expect true or false because the compiler would prevent that from happening. I know that I'm not going to get an empty value, potentially. There's ways you can do things with a type system where you can explicitly say no empty values are even allowed at this point. And if you do allow them, then the type system will say, "Hey, you forgot to check for the empty case. Bad program. I'm rejecting that." And then you have to write that explicit branch for, oh, if empty versus if present. So I really appreciate that style of programming. But then, when you're in a language like Ruby, where you're not dealing with explicit types on purpose, how do you shift that mindset so that you don't need to know the type of the value that you're dealing with? You only want to say, hey, in this context, here's the minimal interface that I want it to conform to. And maybe it's just the truthy-or-falsy interface, and everything beyond that is not relevant. STEPHANIE: I think it's kind of wild to me that this idea of a binary that theoretically seems very clear turns out to be actually quite confusing, ambiguous, philosophical, even. [chuckles] JOËL: Yeah. It's definitely...you can get into some deep, philosophical questions there, language design as well. One aspect, though, that I'm really curious about your thoughts on is bringing new people in who are learning a language. It's really common for people who are learning a language for the first time, learning to code for the first time, to write code that you and I would immediately know, like, that's not going to work. You can't add a Boolean and a number. You're just learning to code. You've never done that before. You don't know. And then how the language reacts to that kind of thing can help guide that experience. So, do you think that truthiness maybe makes things more confusing for newcomers?
Or, maybe on the other side, it helps to smooth that learning curve because you don't have to be like, oh, wait, I have a user here. I can't put that in a condition because that's not a strict true or false. I'm going to coerce it, or I've got to find a predicate method or something. You can just be like, no, put it in. The interpreter will figure it out for you. STEPHANIE: Wow. That's a great question. I'm trying to put myself in the beginner's mindset a little bit and think about what it's like to just try something and the magic of it working. Because, like you said, the interpreter does it for you, or whatever, and something happens, and you're like, wow, like, that was really cool. And I didn't have to know all of the ins and outs of the types of things I was working with. That can be really helpful in just getting them, like, started and getting them just, like, on the ground writing code. And having that feeling of satisfaction that, like, they didn't have to, you know, learn all these things that can be really scary, to make their program work. But I do think it also kind of bites them later once they really realize [laughs] what is going on and the minute that they get that, like, unexpected behavior, right? Like, that becomes a time when you do have to figure out what might be going on under the hood. So two sides of the same coin. JOËL: What you're saying there about, like, maybe smoothing that initial curve but then it biting them later got me thinking. You know how we have the concept of technical debt, where we write code in a way that's maybe not quite as clean today so we can move faster, but then later on we have to pay it back? And I almost wonder if what we have here is almost like a pedagogical debt, where it's going to cost us a month from now, but today it helps us move faster and actually kind of get that momentum going. STEPHANIE: Pedagogical debt. I like that. I think you've coined a new term. Because I really relate to that, where you learn just enough to do the thing now. But, you know, it's probably not, like, the right way or, like, the most informed (I think most informed is probably how I would best describe it) way of doing it. And later, you, yeah, just have to invest a little more into it. And I think that's okay. I think sometimes I do tend to, like, beat myself up over something down the line when I have to deal with some piece of less-than-ideal code that I'd written earlier. Like, I think that, oh, I could have avoided this if only I knew. But the whole point is that I didn't know. [laughs] And, like, that's okay, like, maybe I didn't need to know at the time. JOËL: Yeah, and code that's never shipped is of zero value. So having something that you could ship is better than having something perfect that you didn't ship. STEPHANIE: On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
“She said that she wanted a break from the relationship. I asked her if I should at least hope that she will come back. She said no because she wanted to stay single for some time and that might change things for her. I could not accept that.” Parse through any teenage high schooler's diary, and words like these are all too easy to find. Breaks vs. break-ups, situationships vs. relationships, boyfriend vs. ex-boyfriend: these duels define adolescence. We've all been through some version of it. But in retrospect, we cringe at that bygone era; of course, we are older now. Now we're too mature to put up with these silly little dramas. The same insecurities do not haunt us anymore. Or do they? Deep down in our psyche, are we still that insecure teenage kid? Some folks just never grow up. And those words you just heard weren't those of a teenager, but of a grown man. Sometimes, not growing up, not getting out of the shell of puberty, can have adverse consequences in your adulthood: for you, and for your partner. This is the story of a man still trapped in a teenage drama of his own making; this is the story of Anuj Kumar. FULL TRANSCRIPT: https://docs.google.com/document/d/1wHfMhhyIV5h2A7JlOjlklSzGAUectyHGR1j9S5nbgKc/edit SOURCES: SNU murder-suicide: Police file FIR under Arms Act | Delhi News, The Indian Express Shiv Nadar University murder-suicide: Shooter's video note cites ‘cancer, ex-girlfriend's affairs' Shiv Nadar University Murder: Victim Had Already Filed 2-3 Complaints With Varsity Authorities Against Killer | Delhi News, Times Now ‘She Was Talented, Always Smiling,' Father of Student Allegedly Shot by Classmate ‘She Cheated, Left Me & I Could Not Accept it': Student Who Shot Classmate, Then Self Suspected student accused of calling Shiv Nadar University shooting as dog bite - Hindustan Times Shiv Nadar University shooting row: Unravelling relationship-based violence in young adults - Edexlive Welcome To IANS Live - NATION - Shiv Nadar Varsity murder case: Anuj sent e-mail to varsity officials 12 mins before killing Sneha Who is Neha Chaurasia: Shiv Nadar University shooting victim killed by Anuj Singh in viral video - The SportsGrail Greater Noida's Shiv Nadar University student shot video before killing female classmate, self: 'I have cancer, she cheated on me' News1India on Twitter [translated from Hindi]: "#GreaterNoida: Case of a student murdered, followed by suicide; video recorded after the murder and before the suicide goes #Viral; the student shot his girlfriend dead @noidapolice @CP_Noida @DCPGreaterNoida @Uppolice @dgpup @ShivNadarUniv https://t.co/VHzkD6owyi" / Twitter POLICE COMMISSIONERATE GAUTAM BUDDH NAGAR on Twitter [translated from Hindi]: "Statement given by @DCPGreaterNoida regarding the incident on the Greater Noida Shiv Nadar University campus in which a student shot a female acquaintance and then shot himself. Police Station Dadri. @Uppolice https://t.co/sgIvXYJWN3" / Twitter Noida Univ Murder-suicide: Accused Blames Cancer, Traumas & Ex-girlfriend's Affair Behind Incident in Viral Video - News18 Murder-suicide at Shiv Nadar university | Woman complained in March of harassment by accused, college counselled them: Police | Delhi News, The Indian Express Ex-girlfriend's affair, cancer, past traumas: Video of Greater Noida varsity suicide-murder case surfaces - India Today Student murder: Shiv Nadar University among 5 named in FIR; it says it took up the issue seriously, gave restraining order | Delhi News, The Indian Express
Shiv Nadar University Murder and Suicide Case: Family said - will file a defamation case against those who defame our daughter - news7 noida
Stephanie just got back from a smaller regional Ruby Conference, Blue Ridge Ruby, in Asheville, North Carolina. Joël started a new project at work. Review season is upon us. Stephanie and Joël think about growth and goals and talk about reviews: how to do them, how to write them for yourself, and how to write them for others. Blue Ridge Ruby (https://blueridgeruby.com/) Impactful Articles of 2022 (https://www.bikeshed.fm/369) Constructive vs Predicative Data by Hillel Wayne (https://www.hillelwayne.com/post/constructive/) Parse, don't validate by Alexis King (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) Working Iteratively (https://thoughtbot.com/blog/working-iteratively) thoughtbot's 20th Anniversary Live AMA (https://thoughtbot.com/events/ama-developers-20th-anniversary) 20th Anniversary e-book (https://thoughtbot.com/resources/20-for-20) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And, together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: I just came back from a smaller regional Ruby Conference, Blue Ridge Ruby, in Asheville, North Carolina. And I had a really great time. JOËL: Oooh, I'll bet this is a great time of year to be in Asheville. It's the Blue Ridge Mountains, right? STEPHANIE: Yeah, exactly. It was perfect weather. It was in the 70s. And yeah, it was just so beautiful there, being surrounded by mountains. And I got to meet a lot of new and old Ruby friends. That was really fun, seeing some just conference folks that I don't normally get to see otherwise. And, yeah, this was my second regional conference, and I think I am really enjoying them. I'm considering prioritizing going to more regional conferences over the ones in some of the bigger cities that Ruby Central puts on moving forward. Just because I really like visiting smaller cities in the U.S., places that I otherwise wouldn't have as strong of a reason to go to. JOËL: And you weren't just attending this conference; you were speaking. STEPHANIE: I was, yeah. I gave a talk that I had given before about pair programming and nonviolent communication. And this was my first time giving a talk a second time, which was interesting. Is that something that you've done before? JOËL: I have not, no. I've created, like, a new bespoke talk for every conference that I've been at, and that's a lot of work. So I love the idea of giving a talk you've given before somewhere else. It seems like, you know, anybody can watch the first version on YouTube, generally. But it's not the same as being in the room and getting a chance for someone to see you live and to give a talk, especially at something like a regional conference. It sounds like a great opportunity. What was your experience giving a talk for the second time? STEPHANIE: Well, I was very excited not to do any more work [chuckles] and thinking that I could just show up [chuckles] and be totally prepared because I'd already done this thing before. And that was not necessarily the case. I still kind of came back to my talk after a few months of not looking at it and had some fresh eyes, rewrote some of the things. I was able to apply a few things that I had learned since giving it the first time around, which was good, just having more perspective and insight into the things that I was talking about.
Otherwise, the content didn't really change; I just polished it further. I think in the editing process, you could edit forever, really. So I imagine if I revisit it again, I'll find other things that I want to change. But this time around, I also memorized my slides because, last time, I was a little more dependent on my speaker notes. And part of what I wanted to do this time around, because I had a little more time in preparing, was trying to go from memory. And that went pretty well, I think. JOËL: How did you feel about the delivery of it? Because now you had a chance to have a practice run in front of a real audience. And, as much as you practice at home in front of the mirror, it's not the same as actually giving a talk in front of an audience. STEPHANIE: Yeah. I was surprised by how the audience is also different, and the things that they'll react to are slightly different. There were some jokes that landed similarly and others that didn't quite land with this crowd, but maybe other parts got more of a reaction. So that was surprising. And I think I had to kind of adjust those expectations on the fly as I delivered whatever, you know, line I was kind of expecting some kind of reaction to. And other than memorizing my slides, you know, I think I had the mental capacity to focus a little more on the delivery component that you're talking about because I wasn't, you know, still working on the content itself up until the last minute, and I was just able to direct my mental energy to, I guess, the next level of performance when giving a presentation. And, yeah, I would definitely give this talk again. I really liked that it was something that feels pretty evergreen, something I care a lot about. I don't think it will be a topic that I get kind of bored of anytime soon. So those were all some of the things I was thinking about in giving a talk a second time. JOËL: When you write your speaker notes, do you give yourself directions for expected audience reactions, so something like a pause for laughter after a joke or something like that? STEPHANIE: No. I think I am too nervous about presuming [laughs] how the audience will react to put something in and then have to be, like, super surprised and figure out what to do if they don't react the way that I think they will. So it ends up being that I just kind of go forth. And if I do get a reaction out of them, that's great. But not expecting it works for me because then, at least, I can control how I am presenting and how I'm showing [chuckles] up a little bit more. JOËL: So you're really working with the energy in the room then. STEPHANIE: Yeah, I think so. JOËL: Was this talk recorded? So if people in the audience want to go and watch this talk. STEPHANIE: Yeah. The first version that I gave of it is online if you search for the title "Empathetic Pair Programming with Nonviolent Communication." And this version was recorded as well. So, eventually, it'll also be up. And, I don't know, maybe I'll watch it back and [chuckles] see the difference in presentation. I would be very curious. I've never watched any one of my conference talks fully through the recording from start to end before. But I know that that's something that I could continue to improve on. So maybe one day I'll find the confidence. My other highlight that I wanted to share about this regional conference is how well-organized it was. So it was mainly organized by Jeremy Smith, and I thought he did such an awesome job.
He organized a bunch of activities in Asheville for the Saturday after the conference if folks wanted to stay a little longer and just check out the city. There was a group that went hiking, a group that did a brewery tour. And the activity I chose to do was to go tubing. JOËL: Fun. STEPHANIE: Yeah, it was my first time. So you're basically in an inner tube floating down a very calm river, just hanging out. We were in a group, and you could clip yourself to the rest of the group so you're all, you know, kind of floating down together. But some people would unclip themselves and just go free for a little while. And, yeah, when you get too hot, you can dip into the water to cool off. And I just had such a great time. [laughs] It was almost like being on a Disney ride but out in nature, which is just, like, totally my jam. JOËL: I tried tubing once in Texas. And the inner tubes are black, and in the Texas sun, they get really hot. So every, I don't know, 20 minutes or so, I had to get off the inner tube. It was too hot to sit on. And I had to flip it just because it absorbed so much heat. STEPHANIE: Wow. Yeah, that does sound like it would get very hot. I think the funny thing that I wasn't expecting was how hard it would be to get back into the inner tube after you had gotten in the water, at least for me, because the inner tubes were quite large. And so I couldn't get enough leverage to pull myself [laughs] back up onto it, and ended up several times just, like, flopping belly first into the inner tube and then having to, like, flop over so that I could be on my back and be sitting in it again. And other times, I had to wait a little while until the river got shallower so I could actually stand and just sit back in it. So there were times that it was kind of a struggle, but 90% of it was very chill and fun. So, Joël, what's new in your world? JOËL: I started a new project at work. I'm working with a data warehouse, pulling data in from a variety of sources, getting it all into one kind of unified schema, doing some transformations on it. And then also setting up some sort of outgoing plugins to allow different sources to access that unified data. So this is not in a Rails app, but we do have a Rails app connecting to this data warehouse. Data engineering, at least in this style, is newer to me. So I think it's a really interesting world to get into. I don't know if, technically, this counts as big data. I don't think the term is cool anymore. But five or so years ago, everybody was all about the big data, and that was the hip term to toss around. STEPHANIE: So, is this something pretty new to you? You haven't had too much experience doing this kind of data engineering work before? JOËL: Yeah, at least not with, like, a data warehouse. I think a lot of the work around data transformations, or creating unified schemas, thinking in terms of data in different stages that are at different levels of correctness...I've done a fair amount of ETL, Extract, Transform, Load, or sometimes people shift it around and say, ELT, Extract, Load, Transform. I've done a fair amount of those because I've done a lot of integrations with third-party systems. STEPHANIE: So I've always thought of data engineering as, in some ways, a separate role or a track. And since you've, you know, mostly been doing software development, I'm really curious whether that gives you an interesting lens to look at these problems. JOËL: So, to get the full answer, you should probably ask me again in six months.
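As a concrete picture of that kind of small-scale ETL, here's a minimal Ruby sketch (the endpoint, field names, and the Warehouse stand-in are all hypothetical, not details of Joël's actual project):

```ruby
require "net/http"
require "json"

# Extract: pull raw records from a third-party API (made-up endpoint).
raw_records = JSON.parse(Net::HTTP.get(URI("https://api.example.com/orders")))

# Transform: normalize each record into the unified schema we actually want.
orders = raw_records.map do |record|
  {
    id: Integer(record.fetch("order_id")),
    total_cents: (Float(record.fetch("total")) * 100).round,
    placed_at: record.fetch("created"), # kept as an ISO8601 string
  }
end

# Load: hand the cleaned rows to whatever store sits downstream.
orders.each { |order| Warehouse.insert(:orders, order) } # Warehouse is a stand-in
```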
STEPHANIE: That's fair. JOËL: My initial thought is that there's a shocking amount of overlap between some of these ideas, again, because I've done ETL-style projects a lot. You know, if you've got any kind of Rails app and you're integrating with a third-party API, you're often doing ETL at a very small level. To a certain extent, even if you're doing, let's say, some front-end code, and you're interacting with a back end, depending on how you want to deal with that transformation of getting data from your API, you might be doing something kind of like an ETL. Designing types in something like a TypeScript or an Elm, and thinking in terms of the data that you have, the transforms that you're doing, has a lot of similarities to what you would do in a data warehouse. I think a lot of the general ideas apply. I know I talked at the beginning of this year about articles that were impactful for me. And one of those articles that was really impactful was Hillel Wayne's "Constructive Versus Predicative Data," which is all about structuring data and when you can enforce constraints via the data structure versus when you need to enforce it via code. Similarly, a lot of the ideas from the article "Parse, Don't Validate" by Alexis King. Those articles focused on designing types. But they also, I think, apply when you're thinking of schemas because schemas and types are, in a sense, isomorphic to each other. STEPHANIE: I like what you said there about how, as a software developer, you've probably done this at a much smaller scale. And, yeah, like you were saying, things that you had already learned about or thought about before, you're able to apply to this different set of problems or, like, different approach to programming. Is there anything that has been challenging for you? JOËL: Yes, and it's a weird one. Because we're working with enterprise systems, navigating the websites for these enterprise systems and the documentation for them is not a pleasant experience, trying to get a feel for how the system is made to work. It's just so different when you're used to tools and documentation written by the open-source community. Even with third-party tutorials and things, it's never, like, oh, here's a great article where you can scan and find the thing that you want. It's, hey, I'm a consultant guru on this thing. Sign up for my webinar, and you can have a 15-hour course on how to use this tool. And that's not what I want to do. I just want the five-paragraph blog post on how to do data imports, or how to set up a staging area for data, or something like that. STEPHANIE: Right. You're basically being asked to develop skills in using the enterprise software rather than more general skills for the problem or task, it sounds like. Because apparently, there are people making a business out of teaching other people how to use or navigate the software. JOËL: And I think that's fine. I love that people are making businesses of teaching these. But just the way things are structured, information is not generally as available for this large enterprise software as it is in the open-source world, and even when it is, it's just different patterns of access. So even when you go to a particular technology's website, it's all marketing copy. It's all sales funnel and not a lot of actually telling you really what the technology does. It's all, like, really vague, you know, business speak on, you know, empowering your team, and gathering insights, and all this stuff. So you really do a lot of drilling down.
And what you need to find is the developer site. That's where you get the actual tech documentation. Depending on the tech, it's more or less good. But yeah, the official website of the technology is just...it's not aimed at me as a developer. It's speaking to a different audience. STEPHANIE: That is interesting. I didn't realize that once you are, you know, working on a data warehouse, it is because you are consuming so many different external sources of data, and having to figure out how to work with each one is part of the process to get what you need. JOËL: So there's the external services, but the data warehouse itself that we're using is an enterprise product. STEPHANIE: Got it. JOËL: So, just figuring out how this data warehouse works, it feels like it's a different culture, a different developer culture. STEPHANIE: That's cool. I'll definitely ask you again in a few months, and I look forward to hearing what you report back. So the other topic that I wanted to get into today is reviews, specifically self-reviews. To be honest, our review cycle is happening right now. And I have very much procrastinated [chuckles] on writing them until, you know, one or two days before. So I came into our conversation today, like, in that mind space of thinking about my growth, and my goals, and that kind of stuff. And it got me thinking that I don't hear a lot of people talk about reviews, and how to do them, how to write them for yourself, how to write them for others, how people approach them. Though I would guess that the procrastination part is pretty common, [chuckles] just based on what I'm hearing from other folks on our team too, and what they're up to for the next couple of days before they're due. Joël, have you written your review yet? JOËL: So it's interesting because this review cycle has a few different components. You write a self-review. You write a review of your manager, and then you write a review of several of your peers who have nominated you to write a review. So I've done my own review. I've done my manager's review. I've not completed all of my peer reviews yet. STEPHANIE: That's pretty good. That's better than me. I've only done my own. [laughs] So, yeah, the deadline is coming up. And I'll probably get back to it right after this. I'm curious about your process, though, for writing a self-review. Do you come into it having thought about how you've been doing so far in the last six months or so? Or, when you sit down to write it, are you thinking about these things for the first time in a while? JOËL: Combination. So I think I do come in without necessarily having, like, planned for the review cycle. That being said, throughout the year, I try to build in a fair amount of, like, personal self-reflection, professional self-reflection at various points throughout the year. So I'm not coming into the review cycle being like, oh, I have not thought about professional growth at all. What have I done this year? I think one thing I haven't done quite as well is when I'm doing these moments of self-reflection on my own throughout the year, writing down notes that I could then use to apply when the review cycle comes up. So I am having to rely on memory on, like, oh yeah, last month, when I kind of sat down and thought about areas that I want to improve in or, like, what goals I want to have. And I just commit that to memory.
So, yeah, I think I live in the moment; now that you've asked me this question, you've made me think that maybe I should be taking more regular notes about this. STEPHANIE: One thing I've been really liking about the software that we're using for reviews and other professional growth things is...it's called 15Five. And you can give your co-workers shout-outs using this tool. And as I was writing my review, I could actually open all of the kudos and shout-outs that I received from my peers and just remember some of the things that I worked on or a lot of the things that other people noticed. I tend to sometimes have a hard time remembering some of the smaller things that I've done that made an impact, but other people are usually better about pointing that out than I am. [chuckles] And that has been really helpful because it's, yeah, nice to see like, oh, like, you know, so and so really appreciated when I paired with them on, you know, debugging this thing. And maybe I can pull that into something that I'm writing about the kind of mentorship I've been doing in the last few months. JOËL: How do you feel about the aspect where you have to then give feedback on colleagues? STEPHANIE: I really value and enjoy this aspect because most of the time, I am just gassing my colleagues up [chuckles] and writing, you know, really encouraging things about all of the awesome work that they're doing. So, for me, it actually feels really good. And I was thinking a little bit about my approach to reviewing my peers and review culture in general. I have worked at companies where we have had a very, like, healthy and positive review culture. So it happens often enough that it's become normalized. It's not a really scary thing. And I also like to think about feedback in two types, where you have feedback that you want to give someone so that they can change behavior in a way that helps you work with them better, and then feedback you have for someone for their growth. And once I separated those two things, I realized that, really, for the former, if you're, you know, giving someone constructive feedback because you maybe would like them to be doing something different, that's not necessarily what you want to be writing in their annual review. Those things are usually better communicated in a more timely manner, like, right when you are noticing what you might want changed. And so then when you are doing reviews, like, you've hopefully already kind of gotten all of that stuff out of the way. And you can just focus on areas of growth for them, which is the fun part, I think, in reviewing peers because, yeah, you can give some suggestions to further support them in, like, where they want to go. JOËL: I like that distinction between just general growth suggestions and then interaction suggestions. And just to give an example, it sounds like interaction suggestions would be like, "Oh, when we pair, I would like it if you used this style of communication from, let's say, nonviolent communication. Here's a talk; go watch it." STEPHANIE: [laughs] Yeah, I did a talk on this; go watch it. There used to be a framework for reviews that I've done before that I actually don't quite like. It's the Stop, Start, Continue framework, where you answer questions about, okay, what should this person stop doing? What should they continue doing? And what should they start doing?
And the things that you would put in stop, I think, are probably what you would want to have communicated in a more timely manner, not, you know, really divorced from whatever behavior you might be addressing. And, in general, I think focusing on what you would like others to be doing instead is usually a better approach to handling that kind of feedback just because it avoids making someone feel bad about having done something wrong and, instead, kind of redirects them toward what you would like them to be doing. JOËL: So you're saying if you have something in the stop category, let's say stop interrupting me all the time when we're in meetings, you're saying this is something you prefer not to bring up at all or something that you prefer to bring up one on one and not in the context of a review? STEPHANIE: Something to bring up one on one. Ideally, pretty soon after it might have happened, when it's a little more top of mind. And then you don't end up in that position of maybe misremembering or having the other person misremember and having to figure out, like, who was in the right or in the wrong in understanding how that interaction went. Especially if you're able to do it a little sooner after it happened, you can point out, like, hey, this happened. And instead of framing it as please stop interrupting me, you could say, "Could you please make some space for some folks who've been a little more quiet in the meetings to make sure that they've been able to share?" Still, I think once you've made space to give that kind of constructive feedback in the moment, then when you are writing reviews, you can, like, focus on the growth aspect and not the redirection of how others are doing their work. JOËL: That makes sense. So, what would be an example of the kind of feedback that you like to give to other people in the context of a review? STEPHANIE: Yeah, I think especially if I know what someone is wanting to focus on, right? If I'm working with someone, hopefully, we've kind of gotten to talk about what they like to work on, what they don't like to work on, what they are hoping to spend more time doing, or, yeah, just their hopes and dreams for their professional [chuckles] development. Being able to point out some things that they maybe haven't thought about trying is something I really like to do. I was thinking about a time when I gave a co-worker some feedback as a mentee of theirs, where they had been really awesome at providing information to me about things that I was unfamiliar with. But one thing that I was really hoping for was more tools to figure things out on my own. So instead of sending me a link to some documentation, maybe helping me figure out how to search for the documentation that I'm looking for. And that was something that I could share with them because I knew that they wanted to work on their mentorship skills, and it was an opportunity, I think, for them to take it to a level that's closer to coaching and not just providing information. JOËL: That makes a lot of sense. Maybe flipping it around, is there a point in time where you've received review feedback that has been really valuable to you or really helped you hit the next level in your career? STEPHANIE: I really appreciate feedback that encourages me when I'm maybe a little bit too timid to go seek the things out myself. So there were times when I received some feedback about how great of a leader I could be before I thought I was ready to be a leader.
And they pointed out the qualities of leadership that I had demonstrated that led them to believe that I would be ready for a role like that. And that was really helpful because I don't think that was even necessarily a short-term goal of mine. And it took someone else saying, "I think you're ready," to make me feel a lot more confident about opening that door. I guess this is all to say that I really love review season because of, you know, all of the support I get from my co-workers. And, yeah, just remembering that it's not just a journey I have to take all by myself, that the point of working with other people is for all of us to help each other grow. JOËL: I think something that you mentioned earlier really connected with me, the idea of trying to give feedback, even, like, feedback that's about changing or improving, by phrasing it in a more positive way, or at least framing it in a more positive way. So here's an opportunity for growth rather than here's the thing you're doing wrong. Because that reminds me of two pieces of review feedback that I got when I was a fairly junior developer that have stuck with me ever since. And one of them was really a catalyst for growth, and the other one kind of haunted me. So this first one I got, someone in a review just mentioned that they thought that I was just generally a slow developer, just not fast at writing code. Not a whole lot of context; just, that's who I was. And, in a sense, it was almost like I'd been given this identity, like, oh, I am now Joël, the slow developer. And I didn't want that identity. So I'm kind of like, I want to refuse to accept it. But at the same time, there's always that self-doubt in the back of my mind. And now, anytime I'm on a project with someone else, I'm comparing, oh, am I shipping stories quite as fast as someone else? And if I'm having a rough day and I'm not getting the ticket done that I was hoping to get done by the end of the day, you know, you just get that voice in the back of your head that's like, oh, it's because you're a slow developer. Someone called that out last year, and they were right. So, in a sense, it kind of haunted me. On the flip side, I once got some feedback talking about an opportunity for growth. If I focused on working in more iterative, incremental chunks, it would help me have a smoother workflow and probably help me work faster as well. And that was really kind of an exciting opportunity. It's also stuck with me for years, but not in the haunting sort of way or this, like, bringing in of self-doubt, but more in terms of opportunity. Because now I'm always like, oh, can I break this down into even smaller chunks? Would that help me move faster? Would that help me be less blocked on other people? Would that be easier for our QA team? Would this be easier for my colleagues to review? Just a lot of different opportunities for benefits with working in smaller iterative chunks. And, for years, I've just been kind of honing that skill. And now, looking back over, you know, a decade of doing this, I think it's one of the best skills that I have. And I feel like both of the people who left me those reviews were, in a sense, trying to get me to have a slightly higher velocity. But they took radically different approaches in terms of how they impacted me as a person. STEPHANIE: Yeah, I am really glad you brought that up.
Because I definitely have also received, quote, unquote, "constructive feedback" that maybe wasn't phrased in the right way and that also haunted me. And it doesn't feel good. I think that that sucks. That person wasn't really able to frame it in a way that pushed you to progress in the positive way that you mentioned with learning to work incrementally. And in fact, I almost think that the difference in those two phrasings is encapsulated by a framework for giving feedback that's actionable, specific, and kind. So suggesting that you work incrementally is all of those things, especially if they know that you do want to increase your velocity. But you're being supported in doing it in a way that is positive and growth-oriented as opposed to, like, out of fear that other people think that you are a slow developer. And, you know, that's certainly a way that people are motivated. But I would say that that's not the way that we want to be motivated. [laughs] JOËL: I'm glad we're having this conversation because I think it just reinforces to me the value of good communication skills for developers. And, you know, you can see that when developers have to write documentation, or even things like comments or commit messages. You see it when developers write blog posts. So it's really valuable to work on your communication skills in a lot of these technical areas. But reviews are a very particular area where it's easy to not have the impact that you wanted: the core idea you communicated was probably right, but the way it was communicated undercut it. And so getting good at communicating specifically in the area of reviews, which I assume most of us in the software industry are doing on a semi-regular basis, is probably a good tool to have in your professional tool belt. STEPHANIE: Absolutely. JOËL: We recently hit a big milestone at thoughtbot, where thoughtbot turned 20 years old in early June. And so, throughout June, we've been doing a lot of fun internal things and some external things to celebrate turning 20. And one of those is we're hosting a live AMA with a variety of thoughtbot devs. That's going to be on Friday, June 23rd, a couple of days after this podcast goes live. So, to our listeners, if you're listening to this in the first few days after it goes live, you'll get a chance to join in on the live AMA and ask your questions of our team as we celebrate 20 years. There's a blog post with all the details, and we'll link to that in the show notes. STEPHANIE: One other thing that I think we're doing that's really cool for our 20th anniversary is we published a short ebook with a curated collection of 20 hits from our blog, the thoughtbot blog, over the course of its history, some of the more popular and impactful blog posts that we've ever published. So I highly recommend checking that out. You know, the thoughtbot blog is such an awesome resource. And I discovered a few things that I hadn't read before on the blog from this ebook. So that will also be linked in the show notes. JOËL: I mentioned earlier how one of my opportunities for growth through review was getting better at working iteratively. And, a couple of years ago, I took a lot of the lessons that I'd learned over the years of getting better at working iteratively, and I put them in a blog post, and that blog post made it into that 20th Anniversary ebook. So we can probably link the blog post itself in the show notes.
But also, if you're picking up that ebook, you'll get a chance to see that article on my lessons learned on how to work iteratively. STEPHANIE: Awesome. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Lina Khan, the "Khan" Artist, is at it again. In this speech, she talks about the "problems" that data creates, but her only solution seems to be an internet that serves only the rich. In other words, she doesn't understand the value that data can supply.
Yerr!!! On this episode, the fellas cover the latest in sports, play new game Double Double, Start Bench Cut is back, and Drew confuses everyone. *Time Codes* Open Run- 3:34 Double Double- 1:38:26 Start Bench Cut- 2:02:56 *Cast* Eddie: https://www.instagram.com/eddiehurchata/ Devin: https://www.instagram.com/dons_prezi/ Jonny: https://www.instagram.com/jonny_cinco_/ Kevin: https://www.instagram.com/kjett_jl/ Drew: https://www.instagram.com/mr.drew190/ Dicap (6th Man): https://www.instagram.com/dondicap/ Podcast brought to you by Culture Capsule. https://www.instagram.com/culturecaps... Video sponsored by D.O.N.S. LLC https://www.instagram.com/donsllc/ *Intro Song* Jimi Six & Hop- Goat Mode https://soundcloud.com/donsinc/jimi-six-dez-lansky-goatmode *Outro Song* Jerellz-Good Energy https://www.youtube.com/watch?v=ReaEWauUbt8 *Affiliates* DONS LLC: https://www.donsforever.com Boy From Da Bronx: https://www.youtube.com/channel/UCR7b... Cloud Camp Clothing: https://www.instagram.com/cloudcampclothing/
The much-anticipated February jobs report is out, and it’s a harder one to dissect than last month’s — job gains exceeded expectations, but the unemployment rate ticked up. Christopher Low, chief economist at FHN Financial, gives us some insight. President Biden released his budget request yesterday, which could serve as a policy preview for a potential 2024 reelection campaign. And, we take a look at what big players in the energy industry are talking about at a big conference going on in Houston right now.
A 45-minute testimony before a joint committee in Arizona offers compelling accusations of RICO-level corruption that allegedly involves office holders all the way up to Katie Hobbs, recently installed as governor. While I have no problem believing this level of corruption is happening in America--look no further than Pharma purchasing the Mockingbirds, the CDC, FDA, schools and the military--I don't know what to make of the testimony and cannot judge it fully until I get to see the full report. Revolver News doubts the veracity of the report, but says the reality is probably worse. But it is absolutely worth considering and analyzing because it would explain so much: why politicians adopt, push and defend demonstrably insane positions on crime, drugs and election security; why it was so easy for supposed “hackers” to steal Covid relief funds from separate states like Washington and California; why government office holders refuse to do any actual investigation into elections that are obviously structured to invite fraud. What does God say? On making false allegations: Exodus 20:16 “You shall not give false testimony against your neighbor.” On corruption: 2 Peter 2:19 “They promise them freedom, but they themselves are slaves of corruption. For whatever overcomes a person, to that he is enslaved.” Galatians 6:8 ESV “For the one who sows to his own flesh will from the flesh reap corruption, but the one who sows to the Spirit will from the Spirit reap eternal life.” Bombshell cartel allegations against Katie Hobbs may be false… but the reality is much darker… Judge Rejected Congressman's Bid to Shield Emails From DOJ Arizona's elections have been rigged with the help of the cartels' money, and Katie Hobbs should be in prison, not running the state. Report: Preliminary findings of activities impacting Arizona's election integrity with specific focus on the 2020 and 2022 general elections. Entire report in the tweet below. Time Stamp 34:52 - listing of people taking bribes, objection from Senator Ken Bennett Time Stamp 38:35 - Rep. Rachel Jones: people have a right to know who is getting bribed BREAKING: Georgia Elections Board Accepted $2 Million From Mark Zuckerberg's Group, Violating State Law JUST IN: Governor Ron DeSantis just officially signed into law the removal of Walt Disney's special tax privileges, appointing a state board, and ensuring debts are not transferred to taxpayers. Kavanaughing DeSantis: Leftie Hacks Dig Up DeSantis Yearbook, Emerge With “Scoop” That He Took AP Courses; Obviously, AP courses in the '90s weren't (as) infested with woke ideology and leftist indoctrination, so there is literally no comparison at all except in the name “AP.” James O'Keefe on the suffering that can come from telling the truth ‘Maoist Struggle Session': Former NY Times Writer Details Newsroom Chaos After Tom Cotton Op-Ed; “It was like Caesar on the floor of the Roman Senate or something … I remember closing my laptop and pouring a huge glass of wine even though it was at like noon.
Because I was so f–king freaked out by what we had just witnessed.” Allegations were made against these office holders; we need to see the report. In 2022, according to Maricopa County: 75.4% of Republicans turned out, up from 2018; 68.5% of Democrats turned out, down from 2018; and there are more registered Republicans (1,436,852) than Democrats (1,270,544) in AZ. CBS LOCAL NEWS: Lodi city council member Shakir Khan arrested, now faces voter fraud charges. Show Advertisers: Alan's Soaps https://alanssoaps.com/TODD Use coupon code ‘TODD' to save an additional 10% off the bundle price. Bonefrog Coffee https://bonefrog.us Enter promo code TODD at checkout to receive 5% off your subscription. Bulwark Capital https://knowyourriskradio.com Get your free copy of “Common Cents Investing” Call 866-779-RISK or visit the website. Healthycell https://healthycell.com/todd Journey to better health and save 20% off your first order with promo code TODD. My Pillow https://mypillow.com Sleep cool with the new MyPillow 2.0, now Buy One Get One Free with code TODD. Ruff Greens https://ruffgreens.com/todd Get your FREE Jumpstart Trial Bag of Ruff Greens, simply cover shipping. SOTA Weight Loss https://sotaweightloss.com SOTA Weight Loss is, say it with me now, STATE OF THE ART! Texas Superfoods https://texassuperfoods.com Texas Super Foods is whole food nutrition at its best. GreenHaven Interactive https://greenhaveninteractive.com Get more customers online with a world-class website and Google
Joël has been pondering another tool for thought from Maggie Appleton: diagramming. What does drawing complex things reveal? Stephanie has updates on how Soup Group went, plus a clarification from last week's episode re: hexagons and tessellation. They also share the most impactful articles they read in 2022. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Maggie Appleton tools for thought (https://maggieappleton.com/tools-for-thought) Squint test (https://www.youtube.com/watch?v=8bZh5LMaSmE&themeRefresh=1) Cardinality of types (https://guide.elm-lang.org/appendix/types_as_sets.html) Honeycomb hexagon construction (https://www.nature.com/articles/srep28341) Coachability (https://cate.blog/2021/02/22/coachability/) Strangler Fig Pattern (https://shopify.engineering/refactoring-legacy-code-strangler-fig-pattern) Finding time to refactor (https://thoughtbot.com/blog/finding-the-time-to-refactor) Parse don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) Errors cluster around boundaries (https://thoughtbot.com/blog/debugging-at-the-boundaries) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot that has basically become a two-person book club between me and Joël. [laughter] JOËL: I love that. STEPHANIE: I'm so sorry, I had to. I think we've been sharing so many things we've been reading in the past couple of episodes, and I've been loving it. I think it's a lot of the conversations we have off-air too, and now we're just bringing it on-air. And I am going to lean into it. [laughs] JOËL: I like it. STEPHANIE: So, Joël, what's new in your world? JOËL: So, in a recent episode, I think it was two episodes ago, you shared an article by Maggie Appleton about tools for thought. And I've kind of been going back to that article a few times in the past few weeks. And I feel like I always see something new. And one tool for thought that Maggie explicitly mentions in the article is diagramming, and that's something that we've used as an industry for a long time to deal with conditional logic: just writing a flow diagram. And I feel like that's such a useful tool sometimes to move away from code and text into visuals and draw your problem rather than write your problem. It's often useful either when I'm trying to figure out how to structure some of my own code or when I'm reviewing a PR for somebody else, and something just feels not quite right, but I'm not quite sure what I want to say. And so drawing the problem all of a sudden might give me some insights, might help me identify why something feels off about this code when I can't quite put it into words. STEPHANIE: What does drawing complex things reveal for you? Is there a time where you were able to see something that you hadn't seen before? JOËL: One thing I think it can make more obvious is the shape of the problem. When we describe a problem in words, sometimes there's a sense of like, okay, there are two main paths through this problem or something. And then when we do our code, we try to make it DRY, and we try all these things. And it's really hard to see the flow of logic. And we might actually have way more paths through our code than are actually needed by the initial problem definition.
I think we talked about this in a past episode as well, structuring a multi-step form or a wizard. And oftentimes, that is structured way more complex than it needs to be. And you can really see that difference when you draw out a flow diagram, the difference between forcing everything down a single linear flow with a bunch of little independent conditions versus branching up front three or four or five ways, however many steps you have. And then, from there, it's just executing code. STEPHANIE: I have two thoughts here. Firstly, it's very tragic that this is an audio medium only [laughs] and not also a visual one. Because I think we've joked in the past about when we've, you know, talked about complex problems and branching conditionals and stuff like that, like, oh, like, if only we could show a visual representation to our listeners. [laughs] And secondly, now that makes a lot more sense why there are so many whiteboards just hanging out in offices everywhere. [laughs] JOËL: We should use them more. It's interesting you mentioned the limitations of an audio format that we have. But even just describing the problem in an audio format is different than implementing it in code. So if I were to describe a problem to you that says, oh, we have a multi-step form that has three different steps to it, in that description, you might initially think, oh, that means I want to branch three ways up front, and then each step will need to do some processing. But if you look at the implementation in the code, maybe whoever coded it, and maybe that's yourself, will have done it totally differently with a lot more branching than just three up front because it's a different medium. STEPHANIE: That's a really good point. I also remember reading something about how you can reason about how many branches a piece of code might have if you just look at the structure of the lines of code in your editor if you either step away from it and are just looking at the code not really able to see the text itself but just the shape that it makes. If you have some shorter lines and then a handful of longer lines, you might be able to see like, oh, like these are multiple conditionals happening, which I think is kind of related to what you're saying about taking a piece of code and then diagramming it out to really see the different paths. And I know that that can also be obscured a little bit if you are stylistically using different syntax. Like, if you are using a guard clause to return early, that's a conditional, but it gets a bit hidden from the visual representation than if you had written out the full if statement, for example. JOËL: I think that's a really interesting distinction that you bring up because a lot of languages provide syntactic sugar for common conditional tasks that we do. And sometimes, that syntactic sugar will almost obfuscate the fact that there is a conditional happening at all, which can be great in a lot of cases. But when it comes to analyzing and particularly comparing different implementations, a second conversion that I like to do is converting all of the conditional code to some standardized form, and, for me, that's typically just your basic if...elsif...else expressions. And so any fancy Boolean operators we're doing, any safe navigation that we're doing around nil, maybe some inline conditionals, early returns, things like that, all of the implicit elses that are involved as well, putting them all into some normalized form then allows me to compare two implementations with each other. 
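As a rough Ruby sketch of that normalizing step (the method and the order object here are invented for illustration), compare a terse conditional with its long-form equivalent:

```ruby
# Terse form: an early return plus safe navigation hide the branching.
def shipping_label(order)
  return nil unless order.shippable?

  order.address&.to_label
end

# Normalized form: the same logic with every branch spelled out,
# including the implicit nil cases. (Redefining the method here simply
# replaces the terse version; both are shown for comparison.)
def shipping_label(order)
  if order.shippable?
    if order.address.nil?
      nil
    else
      order.address.to_label
    end
  else
    nil
  end
end
```

Written long-form, it's obvious there are three distinct paths here rather than two; the nil-address branch is easy to miss in the terse version.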
And sometimes, two approaches that we initially thought were identical, just with different syntax, turned out to have slightly different behavior because maybe one has this sort of implicit branch that the other one doesn't. And by converting to a normalized syntax, all of a sudden, this difference becomes super obvious. To be clear, this is not something I do necessarily in the actual code that I commit, not necessarily writing everything long-form. But definitely, when I'm trying to think about conditional code or analyzing somebody else's code, I will often convert it to long-form, some normalized shape so that I can then see some things about it that were not obvious in the final form. Or to make a comparison with something else, and then you can compare apples to apples and say, okay, both these approaches that we're considering in normalized form, here's what they look like. There's some difference here that we do care about or don't care about. STEPHANIE: That's really interesting. I find it very curious that there is a value in having the long-form approach of writing the code out and being able to identify things. But then the end result that we commit might not look like that and be shortened and be kind of, quote, unquote, "polished," or at least condensed with syntactic sugar. And I'm kind of wondering why that might be the case. JOËL: I think a lot of that will come down to your personal or your company's style guide. Personally, I think I do lean a little bit more towards a slightly more explicit form. But there are plenty of times that I will use syntactic sugar as well, as long as everybody knows what it does. But sometimes, it will come at the cost of other analysis techniques. You had mentioned the squint test earlier, which I believe is a term coined by Sandi Metz. STEPHANIE: I think it might be. That rings a bell. JOËL: And that is a benefit that you get by writing explicit conditionals all the time. But sometimes, it is much nicer to write code that is a little bit more terse. And so you have to do the trade-offs there. STEPHANIE: Yeah, that's a really good point. JOËL: So that's two of the sort of three formats that I was thinking about for converting conditional code to gain more insight. The other format is honestly a little bit weird. It's almost a stretch. But from my time spent working with the Elm language, I learned how to use its type system, which uses a concept called algebraic data types, or some languages will call these tagged unions, some languages will call these sum types. This concept goes by a lot of different names. But they're used to define types into model data. But there's a really fun property, which is that you can model conditional code using this as well. And so you can convert executable code into these algebraic data types. And now, you can apply a lot of tools and heuristics that you have from the data modeling world to this conditional code. STEPHANIE: Do you have a practical example? JOËL: So a classic thing that data modelers will say is you should make impossible states impossible. So in practice, this means that when you define a type using these algebraic data types, you should not be able to create more distinct values than are actually valid in this particular system. So, for example, if a value is required to always be present for something and there's no way in the system for a value to become not present, then don't allow it to be nullable. 
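Ruby has no algebraic data types built in, but the "make impossible states impossible" idea can be approximated by giving each valid state its own class. A loose sketch using Ruby 3.2's Data (the states and fields are invented for the example):

```ruby
# A single struct with nullable fields permits nonsense combinations,
# like an error that also carries items, or a loaded state with nil items:
Request = Struct.new(:loading, :error_message, :items)

# One class per state makes those combinations unrepresentable: an error
# always has a message, and a loaded state always has items.
Loading = Data.define
Errored = Data.define(:message)
Loaded  = Data.define(:items)

def render(state)
  case state
  in Loading           then "spinner"
  in Errored(message:) then "error: #{message}"
  in Loaded(items:)    then "#{items.size} items"
  end
end

render(Errored.new(message: "timed out")) # => "error: timed out"
```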
We do something similar when we design a database schema when we put a null false on a column because we know that this will never be null. And so, why allow nulls when you know they should never be there? So it's a similar thing with the types. This sort of analysis that you can do looking at...the fancy term is the types cardinality. I'll link to an article that digs into that for people who are curious. But that can show you whether a type can represent, let's say, ten possible values, but the domain you're trying to model only has 5. And so when there's that discrepancy, there are five valid values that can be modeled by your type and an additional extra five that are not valid that just kind of shake out from the way you implemented things. So you can take that technique and apply it to a conditional that you've converted to algebraic data type form. And that can help find things like paths through your conditional code that don't line up with the problem that you're trying to solve. So going back to the example I talked about earlier of a multi-step form with three different steps, that's a problem that should have three paths through your conditional. But depending on your implementation, if it's a bunch of independent if clauses, you might have a bit of a combinatorial explosion. And there might be 25 different paths through that chunk of code. And that means three of them are the ones that your problem wants, and then the extra 22 are things that should quote, unquote, "never happen," but we all know that they eventually will. So that kind of analysis can help maybe give you pointers to the fact that your current structure is not well-suited to the problem that you're trying to solve. STEPHANIE: I think another database schema example that came to mind for me was using an enum to declare acceptable values for a field. And, yeah, I know exactly what you mean when working with code where you might know, because of the way the business works, that this thing is impossible, and yet, you still have to either end up coding defensively for it or just kind of hold that complexity in your head. And that can lead to some gnarly situations, and it makes debugging down the line a lot more difficult too. JOËL: It definitely makes it really hard for somebody else to know the original intention of the code when a conditional has more paths through it than there actually are actual paths in the problem you're trying to solve. Because you have to load all of that in your head, and our programmer brains are trained to think about all the edge cases, and what if this condition fires but this other one doesn't? Could that lead to a bug? Is that just a thing that's like, well, but the inputs will never trigger that, so you can ignore it? And if there are no comments to tell you, and if there are comments, then do you trust them? Because it -- STEPHANIE: Yes. [laughter] I'll just jump in here and say, yeah, I have seen the comments then conflict with the code as well. And so you have these two sources of information that are conflicting with each other, and you have no idea what is true and what's not. JOËL: So I'm a big fan of structuring conditional code such that the number of unique paths through a set of conditions is the same as the sort of, you might say, logical paths through the problem domain that we haven't added extra paths, just sort of accidentally due to the way we implemented things. STEPHANIE: Yeah. 
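To make that path-count mismatch concrete before moving on, here is a sketch of the multi-step form in both shapes (the step names and predicates are made up):

```ruby
# Independent conditions: each check doubles the number of paths through
# the method, so three checks yield 2**3 = 8 paths for a form that only
# has three real steps.
def fields_to_render(form)
  fields = []
  fields << "personal info" if form.needs_personal_info?
  fields << "address"       if form.needs_address?
  fields << "payment"       if form.needs_payment?
  fields
end

# Branching up front: one branch per step, so the paths through the code
# match the logical paths through the problem, three and three.
def fields_to_render(form)
  case form.current_step
  when 1 then ["personal info"]
  when 2 then ["address"]
  when 3 then ["payment"]
  end
end
```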
And now you have three different ways to visualize that information in your head [laughs] with these mental models. JOËL: Right. So from taking code that is conditional code and then transforming it into one of these other representations, I don't always do all three, but there are tools that I have. And I can gain all sorts of new insights into that code by looking at it through a completely different lens. STEPHANIE: That's super cool. JOËL: So the last episode, you had mentioned that you were going to try a soup club. How did that turn out? STEPHANIE: It turned out great. It was awesome, the inaugural soup group. I had, I think, around eight people total. And I spent...right after work, I went straight to chopping celery [laughs] and onions and just soup prepping. And it was such a good time. I invited a different group of friends than normally come together, and that turned out really well. I think we all kind of had at least one thing in common, which was my goal was just to, you know, have my friends come together and meet new people too. And we had soup, and we had bread. Someone brought a spiced crispy chickpea appetizer that went really well inside of our ribollita vegetable bean soup. And then I had the perfect amount of leftovers. So after making a really big batch of food and spending quite a long time cooking, I wanted to make sure that everyone had their fill. But it was also pretty nice to have two servings left over that I could toss in the freezer just for me and as a reward for my hard work. And then it ended up working out really well because I went on vacation last week. And the night we got back home, we were like, "Oh, it's kind of late. What are we going to do for dinner?" And then I got to pull out the leftover soup from my freezer. And it was the perfect coming home from a big trip, and you have nothing in your fridge kind of deal. So it worked out well. JOËL: I guess that's the advantage of hosting is that you get to keep the leftovers. STEPHANIE: It's true. JOËL: You also have to, you know, make the soup. [laughs] STEPHANIE: Also true. [laughs] But like I said, it wasn't like I had so much soup that I was going to have to eat it every single day for the next week and a half. It was just the amount that I wanted. So I'm excited to keep doing this. I'm hoping to do the next soup group in the next week or two. And then some other folks even offered to host it for next time. So maybe we might experiment with doing a rotating thing. But yeah, it has definitely brought me joy through this winter. JOËL: That's so lovely. What else has been new in your world? STEPHANIE: I have a clarification to make from last week's episode. So last week, we were talking about hexagons and tessellation. And we had mentioned that hexagons and triangles were really strong shapes. And we mentioned that, oh yeah, you can see it in the natural world through honeycomb. And I've since learned that bees don't actually build the hexagon shape themselves. That was something that scientists did think to be true for a little bit, that bees were just geometrically inclined, but it turns out that the accepted theory for how honeycomb gets its shape is that bees build cylindrical cells that later transform into hexagons, which does have a lot of surface area for holding the honey, though the process itself is actually still debated by scientists. 
So there's some research that has supported the idea that it's formed through physical forces like the changing temperature of the wax that transforms it from a cylinder shape into a hexagon, though, yeah, apparently, the studies are still a bit inconclusive. And the last scientific paper I read about this, just to really get my facts straight [laughs], they were kind of exploring aspects of bee behavior that led to the hexagons eventually forming because that does require that the cylinders are perfectly the same size and are at least built in a hexagonal pattern, even though the cells themselves are not hexagons. JOËL: Fascinating. So it sounds like it's either a social thing where the bees do it based off of some behavior. Or if it's a physical thing, it's some sort of like hexagons are a natural equilibrium point that everything kind of trends to, and so as temperature changes, the beehive will naturally trend towards that. STEPHANIE: Yeah, exactly. I have a good friend who is a beekeeper, so I got to pick her brain a little bit about honeycomb. [laughs] MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So in the past few episodes, we've talked about books we're reading, articles that we're reading. This is kind of turning into the Stephanie and Joël book club. STEPHANIE: I love it. JOËL: That got me thinking about things that I've read that were impactful in the past year. So I'm curious for both of us what might be, let's say, the top two or three most impactful articles that you read in 2022. Or maybe to put it another way, what are the top two or three articles that you reference the most in conversations with other people? STEPHANIE: So listeners might not know this, but I actually joined thoughtbot early last year in February. So I was coming into this new job, and I was so excited to be joining an organization with so many talented developers. And I was really excited to learn from everyone. So I kind of came in with really big goals around my technical growth. 
And the end of the year just passed, and I got to do a little bit of reflection. And I was quite proud of myself actually for all the things that I had learned and all the ways that I had grown. And I was reminded of this blog post that I think I had in the back of my mind around "Coachability" by Cate, and she talks about how coaching is different from mentorship. And she provides some really cool mental models for different ways of providing support to your teammates. Let's say mentorship is teaching someone how to swim, and maybe helping someone out with a task might be throwing them a life raft. Coaching is more like seeing someone in the water, but you are up on a bridge, and you are kind of seeing all of their surroundings. And you are identifying ways that they can help themselves. So maybe there's a branch, a tree branch, a few feet away from them. And can they go grab that tree branch? How can they help themselves? So I came to this new job at thoughtbot, and I had these really big goals. But I also knew that I wanted to lean on my new co-workers and just be able to not only learn the things that I was really excited to learn but also trust that they had my best interests in mind as well and for them to be able to point out things that could help my career growth. So the idea of coachability was really interesting to me because I had been coming from a workplace that had a really great feedback culture. But I think this article touches on what to do with feedback in a way that I hadn't seen before. So she also describes being coachable as having two axes, one of them being receptiveness to feedback and the other being actionability in response to feedback. So receptiveness is when you hear feedback; do you listen to it? Do you work through it? How does that feedback fit into your mental model of your goals and your skills? And then actionability is like, okay, what do you do with that? How do you change your behavior? How do you change the way you approach problems? And those two things in mind were really helpful in terms of understanding how I respond to feedback and how to really make the most of it when I receive it. Because there are times when I get feedback, and I don't know what to do with it, you know, maybe it just wasn't specific enough. And so, in that sense, I want to work on my actionability and figuring out, okay, someone said that testing would be a really great opportunity for me to learn. But what can I do to learn how to write better tests? And that might involve figuring that out on my own, like, what strategies work for me. Or that might involve asking them, being like, "What do you recommend?" So yeah, I had this really big year of growth. And I'm excited to keep this mental model in mind when I feel like I might be stuck and I'm not getting the growth that I want and using those axes to kind of determine how to move forward. JOËL: I think the first thing that comes to mind for me is the episode that you and I did a while back about the value of precise language. For example, you talked about the distinction between coaching and mentorship, which I think in sort of colloquial speech, we kind of use interchangeably. But having them both mean different things, and then being able to talk about those or at least analyze yourself through the lens of those two words, I think, is really valuable and may be helping to drive either insights or actions that you can take. 
And similarly, this idea of having two different axes for receptiveness versus...was it changeability you said was the other one? STEPHANIE: Actionability. JOËL: Actionability, I think, is really helpful when you're feeling stuck because now you can realize, oh, is it because I'm not accepting feedback or not getting good feedback? Or is it that I'm getting feedback, but it's hard to take action on it? So just all of a sudden, having those terms and having that mental model, that framework, I feel like equips me to engage with feedback in a way that is much more powerful than when we kind of used all those terms interchangeably. STEPHANIE: Yeah, exactly. I think that it's very well understood that feedback is important and having a good feedback culture is really healthy. But I think we don't always talk about the next step, which is what do you do with feedback? And with the help of this article, I've kind of come to realize that all feedback is valuable, but not all of it is good. And she makes a really excellent point of saying that the way you respond to feedback also depends on the relationship you have with the person giving it. So, ideally, you have a high trust high respect relationship with that person. And so when they give you feedback, you are like, yeah, I'm receptive to this, and I want to do something about it. But sometimes you get feedback from someone, and you might not have that trust in that relationship or that respect. And it just straight up might not be good feedback for you. And the way you engage with it could be figuring out what part of it is helpful for me and what part of it is not? And if it's not helpful in terms of helping your growth, it might at least be informative. And that might help you learn something about the other person or about the circumstances or environment that you're in. JOËL: Again, I love the distinction you're making between helpful and informative. STEPHANIE: Yeah. I think I had to learn that the hard way this year. [laughs] So, yeah, I really hope that folks find this vocabulary or this idea...or consider it when they are thinking about feedback in terms of giving it or receiving it and using it in a way that works for them to grow the way they want to. JOËL: I'm curious, in your interactions, and learning, and growth over the past year, do you feel like you've leaned a little bit more into the mentorship or the coaching side of things? What would you say is the rough percentage breakdown? Are we talking 50-50, 80-20? STEPHANIE: That's such a good question. I think I received both this year. But I think I'm at a point in my career where coaching is more valuable to me. And I'm reminded of a time a few months into joining thoughtbot where I was working and pairing with a principal developer. And he was really turning the workaround on me and asking, like, what do I want to do? What do I see in the code? What areas do I want to explore? And I found it really uncomfortable because I was like, oh, I just want you to tell me what to do because I don't know, or at least at the time, I was really...I found it kind of stressful. But now, looking back on it and with this vocabulary, I'm like, oh, that's what true coaching was because I gained a lot of experience towards my foundational skill set of figuring out how to solve problems or identifying areas of refactoring through that process. 
And so sometimes coaching can feel really uncomfortable because you are stretching outside of your comfort zone and that your coach is hopefully supporting you but not just giving you the help but teaching you how to help yourself. JOËL: That's a really interesting thing to notice. And I think what I'm hearing is that coaching can feel less comfortable than mentoring because you're being asked to do more of the work yourself. And you're maybe being stretched in some ways that aren't exactly the same as you would get in a more mentoring-focused scenario. Does that sound right? STEPHANIE: Yeah, I think that sounds right because, like I said, I was also receiving mentorship, and I learned about new things. But those didn't always solidify in terms of empowering me next time to be able to do it without the help of someone else. Joël, what was an article that really spoke to you this last year? JOËL: So I really appreciated an article by Adrianna Chang, who's a developer at Shopify, about "Refactoring Legacy Code with the Strangler Fig Pattern." And it talks about this approach to moving refactoring code from one implementation to another. And it's a longer-ranged process, and how to do so incrementally. And a big theme for me this year has been refactoring and incremental change. I've had a lot of conversations with people about how to spot smaller steps. I've written an article on working incrementally. And so I think this was really nice because it gave a very particular technique on how to do so with an example. And so, because these sorts of conversations kept coming up this year, I found myself referencing this article all the time. STEPHANIE: I really loved this article too. And this last year, I also saw a strangler fig tree for the first time in real life in Florida. And I think that was after I had read this article. And it was really cool to make the connection between something I was seeing in nature with a pattern in software development or technique. JOËL: We have this metaphor, and now you get to see the real thing. I was excited because, at RubyConf Mini this year, I actually got to meet Adrianna. So it was really cool. It's like, "Hey, I've been referencing your article all year. It's super cool to meet you in person." STEPHANIE: That's awesome. I love that, just being able to support members of the community. What I really liked about the approach this article advocated for is that it allowed developers to continue working. You don't have to halt everything and dedicate time to refactor and not get any new feature work done. And that's the beauty of the incremental approach that you were talking about earlier, where you can continue development. Sometimes that refactoring might be paused for some reason or another, but then you can pick back up where you left off. And that is really intriguing to me because I think this past year, I was working on a client where refactoring seemed like something we had to dedicate special time for. And it constantly became tough to prioritize and sell to stakeholders. Whereas if you incorporate it into the work and do it in a way that doesn't stop the show [laughs] from going on, it can work really well and work towards sustainability and maintenance, which is another thing that we've talked a lot about on the show. JOËL: Something that's really powerful, I think, with that technique is that it allows you to have all of the intermediate steps get merged into your main branch and get shipped. 
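As a minimal sketch of that step-by-step hand-off (the billing classes are invented, and each step is an ordinary PR that merges to main):

```ruby
# Step 1: introduce a facade that only delegates to the legacy code.
# Behavior is unchanged, so this ships immediately.
class Billing
  def invoice(order)
    LegacyBilling.generate_invoice(order)
  end

  def receipt(order)
    LegacyBilling.generate_receipt(order)
  end
end

# Step 2, in a later commit: reroute one method at a time to the new
# implementation, strangling the legacy path gradually. Work can pause
# at any point with every step already merged and shipped.
class Billing
  def invoice(order)
    NewBilling.invoice(order)
  end

  def receipt(order)
    LegacyBilling.generate_receipt(order) # not migrated yet
  end
end
```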
So you don't have to have this long-running branch with a big change that's constantly going stale, and you're having to keep in sync with the main branch. And, unfortunately, I've often seen even this sort of thing where you create a long-running branch for a big change, a big refactor, and eventually, it just gets abandoned, and you have not locked in any wins. STEPHANIE: Yeah, that's the worst of both worlds where you've dedicated time and resources and don't get the benefits of that work. I also liked that the strangler fig pattern kind of forces you to really understand the existing code. I think working with legacy code can be really challenging. And a lot of people don't like to do it because it involves a lot of spelunking and figuring out, okay, what's really going on. But in order to isolate the pieces to, you know, slowly start to stop making calls to the old code, it requires that you take a hard look at your legacy code and really figure it out. And I honestly think that that then informs the new code that you write to better support both the old feature and also any new features to come. JOËL: Definitely. The really nice thing about this pattern is that it also scales up and down. You can do this really small...even as part of a feature branch; maybe it's just part of your development process, even if you don't necessarily ship all of the intermediate steps. But it helps you work more incrementally and in a tighter scope. And then you can scale it up as big as changing out entire sections of a framework or...I think Adrianna's example is like switching out a data source. And so you can do some really large refactors. But then you could do it as well on just a small feature. I really like using this pattern anytime you're doing things like Rails upgrades, and you've got old gems that might not convert over where it's like, oh, the community abandoned this gem between Rails 4 and Rails 5. But now you need sort of a bridge to get over. And so I think that pattern is particularly powerful when doing something like a Rails upgrade. STEPHANIE: Very Cool. JOËL: So what would be a second article that was really impactful for you in the past year? STEPHANIE: So, speaking of refactoring, I really enjoyed a blog post called Finding Time to Refactor by a former thoughtboter, German Velasco. He makes a really great point that we should think of completeness in our work, not just when the code works as expected or meets the product requirements, but also when it is clear and maintainable. And so he really advocates for baking refactoring into just your normal development process. And like I said, that goes back to this idea that it can be incremental. It doesn't have to be separate or something that we do later, which is kind of what I had learned before coming to thoughtbot. So when I was also speaking about just my technical growth, this shift in philosophy, for me, was a really big part of that. And I just started kind of thinking and seeing ways to just do it in my regular process. And I think that has really helped me to feel better about my work and also see a noticeable improvement in the quality of my code. So he mentioned the three times that he makes sure to refactor, and that is one when he is practicing TDD and going through the red-green-refactor cycle. JOËL: It's in the name. STEPHANIE: [laughs] It really is. 
Two, when code is difficult to understand, so if he's coming in and fixing a bug and he pays the tax of trying to figure out confusing code, that's a really great opportunity to then reduce that carrying cost for others by making it clear while you're in there, so leaving things better than you found it. And then three, when the existing design doesn't work. We, I think, have mentioned the adage, "Make the change easy, and then make the easy change." So if he's coming in to add a new feature and it's just not quite working, then that's a really good opportunity to refactor the existing design to support this new information or new concept. JOËL: I like those three scenarios. And I think that second one, in particular, resonated with me, the making things easier to understand. And in the sort of narrower sense of the word refactoring, traditionally, this means changing the structure of the code without changing its behavior. And I once had a situation where I was dealing with a series of early return expressions in a method that were all returning Booleans. And it was really hard because there were some unlesses, some ifs, some weird negation happening. And I just couldn't figure out what this code was doing. STEPHANIE: Did you draw a diagram? [laughs] JOËL: I did not. But it turns out this code was untested. And so I pretty much just tried, like, it took two Booleans as inputs and gave back a Boolean. So I just tried all the combinations, put it in, saw what it gave me out, and then wrote tests for them. And then realized that the test cases were telling me that this code was always returning false unless both inputs were true. And that's when it kind of hits me, it's like, wait a minute, this is Boolean AND. We've reimplemented Boolean AND with this convoluted set of conditional code. And so, at the end there, once I had that test coverage to feel confident, I went in and did a refactor where I changed the implementation. Instead of being...I think it was like three or four inline conditionals, just rewrote it as argument one and argument two, and that was much easier to read. STEPHANIE: That's a great point. Because the next time someone comes in here, and let's say they have to maybe add another condition or whatever, they're not just tacking on to this really confusing thing. You've hopefully made it easier for them to work with that code. And I also really appreciated, you know, I was mentioning how this article affected my thought process and how I approach development, but it's a really great one to share to then foster a culture of just continuous refactoring, I guess, is what I'm going to call it [laughs] and hopefully, avoiding having to do a massive rewrite or a massive effort to refactor. The phrase that comes to mind is many hands make light work. And if we all incorporated this into our process, perhaps we would just be working all around with more delightful code. Joël, do you have one more article that really stood out to you this year? JOËL: One that I think I really connected with this year is "Parse, Don't Validate" by Alexis King. Long-time listeners of the show will have heard me talk about this a little bit with Chris Toomey when he was a guest on the show this past fall. But the gist of the article is that the process of parsing is converting a broader type into a narrower type with the potential for errors. So traditionally, we think of this as turning a string, which is very broad. All sorts of things are strings, and then you turn it into something else.
So maybe you're parsing JSON. So you take a string of characters and try to turn it into a Ruby hash, but not all strings are valid hashes. So there's also the possibility for errors. And so, JSON.parse() could raise an error in Ruby. This idea, though, can be then expanded because, ideally, you don't want to just check that a value is valid for your stricter rules. You don't want to just check that a string is valid JSON and then pass the string along to the next person. You actually want to transform it. And then everybody else down the line can interact with that hash and not have to do a check again is this valid JSON? You've already validated that you've already converted it into a hash. You don't need to check that it's valid JSON again because, by the nature of being a hash, it's impossible for it to be invalid. Now, you might have some extra requirements on that hash. So maybe you require certain keys to be present and things like that. And I think that's where this idea gets even more powerful because then you can kind of layer this on top and have a second parsing step where you say, I'm going to parse this hash into, let's say, a shopping cart object. And so, not all Ruby hashes are valid shopping carts. And so you try to take a broader value and coerce it into a narrower value or transform it into a narrower value and potentially raise an error for those hashes that are not valid shopping carts. And then, whoever down the line gets a shopping cart object, you can just call items on it. You can call price on it. You don't need to check is this key present? Because now you have that certainty. STEPHANIE: This reminds me of when I was working with TypeScript in the summer of last year. And having come from a dynamically-typed language background, it was really challenging but also really interesting to me because we were also parsing JSON. But once we had transformed or parsed that data into this domain object, we had a lot more confidence about what we were working in. And all the functions we wrote down the line or used on the line, we could know for sure that, okay, it has these properties about it. And that really shaped the code we wrote. JOËL: So use the word confident here, which, for me, it's a keyword. And so you can now assume that certain properties are true because it's been checked once. That can be tricky if you don't actually do a transformation. If you're just sort of passing a raw value down, you'll often end up with code that is defensive that keeps rechecking the same conditions over and over. And you see this lot around nil in Ruby where somebody checks for a value for nil, and then inside that conditional, three or four other conditions deep, we recheck the same value for nil again, even though, in theory, it should not be nil at that point. And so by doing transformations like that, by parsing instead of just validating, we can ensure that we don't have to repeat those conditions. STEPHANIE: Yeah, I mean, that refers back to the analyzing conditional code that we spent a bit of time talking about at the beginning of this episode. Because I remember in that application, we render different components based on the status of this domain object. And there was a condition for when the status was something that was not expected. And then someone had left a comment that was like, technically, this should never happen. But I think that he had to add it to appease the compiler. 
And I think had we been able to better enforce those boundaries, had we been more thoughtful around our domain modeling, we could have figured out how to make sure that we weren't then introducing that ambiguity down the line. JOËL: I think it's interesting that you immediately went to talking about TypeScript here because TypeScript has a type system. And the "Parse, Don't Validate" article is written in Haskell, which is another typed language. And types are great for showing you exactly like, here's the boundary. On this side of it, it's a string, and on this side here, it's a richly-typed value that has been parsed. In Ruby, we don't have that, everything is duck-typed, but I think the principle still applies. It's a little bit more implicit, but there are zones of high or low assumptions about the data. So when I'm interacting directly with raw input from a third-party endpoint, I'm really only expecting some kind of raw string from the body of the response. It may or may not be valid. There are all sorts of checks I need to do to make sure I can do anything with it. So that is a very low assumption zone. Later on, in the business logic part of the code, I might expect that I can call a method on the object to get the price of a shopping cart or a list of items or something like that. Now I'm in a much higher assumption zone. And being self-aware about where we transition from low assumptions to high assumptions is, I think, a really key takeaway for how we interact with code in Ruby. Because, oftentimes, where that boundary is a little bit fuzzy or where we think it's in one place but it's actually in a different place is where bugs tend to cluster. STEPHANIE: Do you have any thoughts about how to adhere to those rules that we're making so we're not having to assume in a dynamically-typed language? JOËL: One way that I think is often helpful is trying to use richer objects and to not just rely on primitives all the time. So don't pass a business process a hash and be just like, trust me, I checked it; it's got the right keys because the day will come when you pass it a malformed hash and now we're going to have an error in the business process. And now we have a dilemma because do we want to start adding defensive checks in the business process to be like, oh, are all our keys that we expect present, things like that? Do we need to elsewhere in the code make sure we process the hash correctly? It becomes a little bit messy. And so, oftentimes, it might be better to say, don't pass a raw hash around. Create a domain object that has the actual method that you want, and pass that instead. STEPHANIE: Oh, sounds like a great opportunity to use the new data class in Ruby 3.2 that we talked about in an episode prior. JOËL: That's a great suggestion. I would definitely reach for something like that, I think, in a situation where I'm trying to model something a little bit richer than just a hash. STEPHANIE: I also think that there have been more trends around borrowing concepts from functional programming, and especially with the introduction of classes that represent nil or empty states, so instead of just using the default nil, having at least a bit of context around a nil what or an empty what. That then might have methods that either raise an error or just signal that something is wrong with the assumptions that we're making around the flexibility that we get from duck typing.
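Pulling those threads together, here is a hedged sketch of parsing at the boundary into a rich domain object, using the Ruby 3.2 Data class mentioned above (the JSON shape and all the names are illustrative):

```ruby
require "json"

Item = Data.define(:name, :price)

Cart = Data.define(:items) do
  # Parse a raw JSON string into a Cart, or raise. Past this boundary,
  # callers never re-check keys or types: any Cart that exists is valid
  # by construction, so nil checks don't get repeated downstream.
  def self.parse(json)
    data = JSON.parse(json)
    items = data.fetch("items").map do |item|
      Item.new(name: item.fetch("name"), price: item.fetch("price"))
    end
    new(items: items)
  rescue JSON::ParserError, KeyError => e
    raise ArgumentError, "not a valid cart: #{e.message}"
  end

  def total
    items.sum(&:price)
  end
end

cart = Cart.parse('{"items": [{"name": "tea", "price": 4}]}')
cart.total # => 4
```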
I'm really glad that you proposed this topic idea for today's episode because it really represented a lot of themes that we have been discussing on the show in the past couple of months. And I am excited to maybe do this again in the future to just capture what's been interesting or inspiring for us throughout the year. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thank you so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Jeff Mayberry and Samuel Lau review stocks (2:39), fixed income (5:09), commodities (7:14), macro news (8:54) and Fedspeak (11:42) for the week ended Jan. 20, 2023. Then for their Topic of the Week (16:15), they explore the measurement of food prices, including significant divergences between international and U.S. metrics measuring the cost of food. In their preview of economic data prints for the week of Jan. 23-27 (32:38), Jeff and Sam say they will be watching for the Leading Economic Index on Monday; the S&P Global manufacturing and services PMIs, Tuesday; fourth quarter U.S. GDP, Thursday; and especially the Federal Reserve's favorite inflation gauge, the PCE Core Deflator, Friday.
Array Cast - January 6, 2023 Show Notes: Thanks to Bob Therriault, Adám Brudzewsky, and Marshall Lochbaum for gathering these links: [01] 00:01:13 Twitter Poll for APL Cast https://twitter.com/a_brudz/status/1607653845445873664 [02] 00:04:30 Revamped BQNcrate https://mlochbaum.github.io/bqncrate/ [03] 00:06:44 APLcart https://aplcart.info [04] 00:07:43 Inclusive Range in Q https://www.5jt.com/the-rest-is-silence p: Prime in J https://code.jsoftware.com/wiki/Vocabulary/pco Prime in Dyalog APL https://dfns.dyalog.com/n_pco.htm [05] 00:09:42 Consecutive values https://mlochbaum.github.io/bqncrate/?q=consecutive%20values [06] 00:11:46 APL Tacit help https://tacit.help BQN https://saltysylvi.github.io/bqn-tacit-helper/ J tte tacit to explicit https://code.jsoftware.com/wiki/Addons/debug/tte 13 : explicit to tacit https://code.jsoftware.com/wiki/Vocabulary/com J Phrases https://code.jsoftware.com/wiki/Phrases [07] 00:19:39 Fun Q https://fun-q.net/ APL Farm Discord/Matrix https://apl.wiki/APL_Farm [08] 00:22:00 Nick Psaris Episode on ArrayCast https://www.arraycast.com/episodes/episode42-nick-psaris-q [09] 00:24:20 Extended Precision and Rational Types in J https://www.jsoftware.com/help/jforc/elementary_mathematics_in_j.htm#_Toc191734516 BQN systemMath.fact https://github.com/mlochbaum/BQN/blob/master/spec/system.md#math NARS 2000 https://aplwiki.com/wiki/NARS2000 [10] 00:26:55 Dyalog Licence https://www.dyalog.com/prices-and-licences.htm CBQN GPL-3 Licence https://github.com/dzaima/CBQN#license J GPL-3 Licence https://github.com/jsoftware/jsource/blob/master/license.txt q Licence https://kx.com/developers/download-licenses/ [11] 00:29:05 April Programming Language https://aplwiki.com/wiki/April [12] 00:31:20 Sort in BQN https://github.com/mlochbaum/BQN/blob/master/doc/order.md#sort Without in APL https://aplwiki.com/wiki/Without Less in J https://code.jsoftware.com/wiki/Vocabulary/minusdot#dyadic [13] 00:34:30 Jelly programming language https://apl.wiki/Jelly https://github.com/DennisMitchell/jellylanguage [14] 00:35:08 Rust programming language https://www.rust-lang.org/ [15] 00:36:40 Lesser of <. in J https://code.jsoftware.com/wiki/Vocabulary/ltdot#dyadic [16] 00:38:20 Code Golf https://apl.wiki/Code_golf Parse float function https://mlochbaum.github.io/BQN/spec/system.html#input-and-output [17] 00:40:44 APL ⎕D https://help.dyalog.com/latest/#Language/System%20Functions/d.htm APL ⎕C https://help.dyalog.com/latest/#Language/System%20Functions/c.htm APL ⎕A https://help.dyalog.com/latest/#Language/System%20Functions/a.htm Advent of Code https://en.wikipedia.org/wiki/Advent_of_Code [18] 00:43:16 APLx https://aplwiki.com/wiki/APLX APL PLUS https://aplwiki.com/wiki/APL*PLUS [19] 00:46:23 Dyalog ⎕DT https://help.dyalog.com/latest/#Language/System%20Functions/dt.htm [20] 00:52:46 Jelly Tutorial https://github.com/DennisMitchell/jellylanguage/wiki/Tutorial [21] 00:57:10 Plus Scan in BQN https://github.com/mlochbaum/BQN/blob/master/doc/scan.md APL +.× https://help.dyalog.com/latest/#Language/Primitive%20Operators/Inner%20Product.htm J +/ . * https://www.jsoftware.com/help/jforc/applied_mathematics_in_j.htm#_Toc191734505 [22] 01:00:30 q advent of code solutions http://github.com/qbists/studyq/ [23] 01:01:30 SQL https://en.wikipedia.org/wiki/SQL q for Mortals https://code.kx.com/q4m3/ [24] 01:04:21 BQN Advent of Code list https://mlochbaum.github.io/BQN/community/aoc.html [25] 01:08:42 Adám's link http://www.jsfuck.com/ https://en.wikipedia.org/wiki/JSFuck [26] 01:10:02 q links for Advent of Code https://github.com/qbists/studyq/tree/main/aoc/2022 J forums Advent of Code https://www.jsoftware.com/cgi-bin/forumsearch.cgi?all=&exa=advent+of+code&one=&exc=&add=&sub=&fid=&tim=0&rng=0&dbgn=1&mbgn=1&ybgn=2005&dend=31&mend=12&yend=2022 J wiki Advent of Code https://code.jsoftware.com/wiki/Essays/Advent_Of_Code APL wiki Advent of Code https://apl.wiki/aoc K Wiki Advent of Code: https://k.miraheze.org/wiki/Advent_of_Code [27] 01:12:40 Convolutional Neural Networks in APL https://dl.acm.org/doi/pdf/10.1145/3315454.3329960 Neural Networks https://aplwiki.com/wiki/Neural_networks [28] 01:15:00 Dr. Raymond Polivka's new APL book: http://aplclass.com/book/ APL Stefan Kruger Learning APL https://aplwiki.com/wiki/Books#Learning_APL J J for C Programmers https://www.jsoftware.com/help/jforc/contents.htm J Playground Example|Neural Networks https://jsoftware.github.io/j-playground/bin/html2/# BQN Tutorials https://mlochbaum.github.io/BQN/tutorial/index.html [29] 01:17:38 APL Wiki Learning Resources https://aplwiki.com/wiki/Learning_resources k Wiki Learning Resources https://k.miraheze.org/wiki/Learning_Resources J Wiki Learning Resources https://code.jsoftware.com/wiki/Guides/GettingStarted [30] 01:19:21 Contact AT ArrayCast DOT com
Hey, it's Alex from Remote Work Life. On today's episode of the Remote Work Life Business Spotlight, I'm featuring yet another top remote business called Parse.ly!
Two things to know today TechAisle's 2023 Priorities and challenges insights AND Two competing sets of jobs data to parse Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/ Support the show on Patreon: https://patreon.com/mspradio/ Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on: Facebook: https://www.facebook.com/mspradionews/ Twitter: https://twitter.com/mspradionews/ Instagram: https://www.instagram.com/mspradio/ LinkedIn: https://www.linkedin.com/company/28908079/
Vigil is nominated for Best Podcast of Milwaukee 2022. You can vote for it here! Voting runs from Nov. 3 through Dec. 1, 2022. You must vote in at least 3 categories for your vote to count. Best Podcast is listed under "City Confidential." You will need to provide a valid email address to vote. --- Vigil Part 8: Parse Part 2 Vigil has met his match in Parse as he dives deeper into conflict with the most powerful organization in the heroic world. How will he fare? Join Maria Kennedy in Part 8 to find out. Content Warning: This episode contains depictions of violence, crime, hacking, surveillance, loss of memory, loss of mental faculties, and a mortar strike, as well as mentions of criminal organizations and kidnapping. There are some sudden, loud noises including a large explosion. It also contains strong language and themes that may not be suitable for all audiences. Listener discretion is advised. The Transcript for Part 8 of Vigil is too large to fit in our show notes, but you can find it here: Episode Transcript (May Contain Spoilers) Vigil is a superhero, audio fiction thriller in ten parts from Adam Qutaishat (Button Podcasts) and All In Productions. Subscribe on your feed to be the first to listen to Vigil, and learn more about the series and creators by checking out our website at vigilpod.com. You can follow us on social media at buttonpods on Instagram and Facebook. You can drop us a line at contact@vigilpod.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week, Editor-in-Chief Elliot Williams and Staff Writer Dan Maloney get together for a look at everything cool under the hardware-hacking sun. Think you need to learn how to read nerve impulses to run a prosthetic hand? Think again -- try spring-loaded plungers and some Hall effect sensors. What's Starlink saying? We're not sure, but if you're clever enough you can use the radio link for ad hoc global positioning. Historically awful keyboards, pan-and-scan cable weather stations, invisibility cloaks, plumbing fittings for electrical controls -- we'll talk about it all. And if you've never heard two Commodore 64s and a stack of old floppies turned into an electronic accordion, you really don't know what you're missing. We've got sooo many links. You must click on them all!
Chris Toomey is back! (For an episode.) He talks about what he's been up to since handing off the reins to Joël. He's been playing around with something at Sagewell that he enjoys. At the core of it? Serializers. Primalize gem (https://github.com/jgaskins/primalize) Derek's talk on code review (https://www.youtube.com/watch?v=PJjmw9TRB7s) Inertia.js (https://inertiajs.com/) Phantom types (https://thoughtbot.com/blog/modeling-currency-in-elm-using-phantom-types) io-ts (https://gcanti.github.io/io-ts/) dry-rb (https://dry-rb.org/) parse don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) value objects (http://wiki.c2.com/?ValueObject) broader perspective on parsing (https://thoughtbot.com/blog/a-broader-take-on-parsing) Enumerable#tally (https://medium.com/@baweaver/ruby-2-7-enumerable-tally-a706a5fb11ea) RubyConf mini (https://www.rubyconfmini.com/) where.missing (https://boringrails.com/tips/activerecord-where-missing-associations) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by a very special guest, former host Chris Toomey. CHRIS: Hi, Joël. Thanks for having me. JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Chris, what's new in your world? CHRIS: Being on this podcast is new in my world, or everything old is new again, or something along those lines. But, yeah, thank you so much for having me back. It's a pleasure. Although it's very odd, it feels somehow so different and yet very familiar. But yeah, more generally, what's new in my world? I think this was probably in development as I was winding down my time as a host here on The Bike Shed, but I don't know that I ever got a chance to talk about it. There has been a fun sort of deep-in-the-weeds technical thing that we've been playing around with at Sagewell that I've really enjoyed. So at the core of it, we have serializers. So we take some data structures in our Ruby on Rails code base, and we need to serialize them to JSON to send them to the front end. In our case, we're using Inertia, so it's not quite a JSON API, but it's fine to think about it in that way for the context of this discussion. And what we were finding is our front end has TypeScript. So we're writing Svelte, which is using TypeScript. And so we're stating or asserting that the types like, hey, we're going to get this data in from the back end, and it's going to have this shape to it. And we found that it was really hard to keep those in sync to keep, like, what does the user mean on the front end? What's the data that we're going to get? It's going to have a full name, which is a string, except sometimes that might be null. So how do we make sure that those are keeping up to date? And then we had a growing number of serializers on the back end and determining which serializer we were actually using, and it was just...it was a mess, to put it lightly. And so we had explored a couple of different options around it, and eventually, we found a library called Primalize. So Primalize is a Ruby library. It is for writing JSON serializers. But what's really interesting about it is it has a typing layer. It's like a type system sort of thing at play. So when you define a serializer in Primalize, instead of just saying, here are the fields; there is an ID, a name, et cetera, you say, there is an ID, and it is a string. 
There is a name, and it is a string, or an optional string, which is the even more interesting bit. You can say array. You can say object. You can say an enum of a couple of different values. And so we looked at that, and we said, ooh, this is very interesting. Astute listeners will know that this is probably useless in a Ruby system, which doesn't have types or a compilation step or anything like that. But what's really cool about this is when you use a Primalize serializer, as you're serializing an object, if there is ever a type mismatch, so the observed type at runtime and the authored type if those ever mismatch, then you can have some sort of notification happen. So in our case, we configured it to send a warning to Sentry to say, "Hey, you said the types were this, but we're actually seeing this other thing." Most often, it will be like an Optional, a null sneaking through, a nil sneaking through on the Ruby side. But what was really interesting is as we were squinting at this, we're like, huh, so now we're going to write all this type information. What if we could somehow get that type information down to the front end? So I had a long weekend, one weekend, and I went away, and I wrote a bunch of code that took all of those serializers, ran through them, and generated the associated TypeScript interfaces. And so now we have a build step that will essentially run that and assert that we're getting the same thing in CI as we have committed to the codebase. But now we have the generated serializer types on the front end that match to the used serializer on the back end, as well as the observed run-time types. So it's a combination of a true compilation step type system on the front end and a run-time type system on the back end, which has been very, very interesting. JOËL: I have a lot of thoughts here. CHRIS: I figured you would. [laughs] JOËL: But the first thing that came to mind is, as a consultant, there's a scenario with especially smaller startups that generally concerns me, and that is the CTO goes away for a weekend and writes a lot of code... CHRIS: [laughs] JOËL: And brings in a new system on Monday, which is exactly what you're describing here. How do you feel about the fact that you've done that? CHRIS: I wasn't ready to go this deep this early on in this episode. JOËL: [laughs] CHRIS: But honestly, that is a fantastic question. It's a thing that I have been truly not struggling with but really thinking about. We're going to go on a slight aside here, but I am finding it really difficult to engage with the actual day-to-day coding work that we're doing and to still stay close to the codebase and not be in the way. There's a pattern that I've seen happen a number of times now where I pick up a piece of work that is, you know, one of the tickets at the top of the backlog. I start to work on it. I get pulled into a meeting, then another meeting, then three more meetings. And suddenly, it's three days later. I haven't completed this piece of work that was defined to be the next most important piece of work. And suddenly, I'm blocking the team. JOËL: Hmmm. CHRIS: So I actually made a rule that I'm not allowed to own critical path work, which feels weird because it's like, I want to be engaged with that work. So the counterpoint to that is I'm now trying to schedule pairing sessions with each of the developers on the team once a week. And in that time, I can work on that sort of stuff with them, and they'll then own it and run with it. 
So it makes sure that I'm not blocking on those sorts of things, but I'm still connected to the core work that we're doing. But the other thing that you're describing of the CTO goes away for the weekend and then comes back with a new harebrained scheme, I'm very sensitive to that, having worked on, frankly, I think, the same project. I can think of a project that you and I worked on where we experienced this. JOËL: I think we're thinking of the same project. CHRIS: So yes. Like, I'm scarred by that and, frankly, a handful of experiences of that nature. So we actually, I think, have a really healthy system in place at Sagewell for capturing, documenting, prioritizing this sort of other work, this developer-centric work. So there's the feature and bug work that gets prioritized in one list over here that is owned by our product manager. Separately, the dev team gets to say, here are the pain points. Here's the stuff that keeps breaking. Here are the things that I wish were better. Here are the observability, hard-to-understand bits. And so we have a couple of different systems at play and recurring meetings and sort of unique ceremonies around that, and so this work was very much a fallout of that. It was actually a recurring topic that we kept taking different stabs at, and we never quite landed it. And then I showed up this one Monday morning, and I was like, "I found a thing; what do we think?" And then, critically, from there, I made sure I paired with other folks on the team as we pushed on the implementation. And then, actually, I mentioned Primalize, the library that we're using. We have now since deprecated Primalize within the app because we kept just adding to it so much that eventually, we're like, at this point, should we own this stuff? So we ended up rewriting the core bits of Primalize to better fit our use cases. And now we've actually removed Primalize, wonderful library. I highly recommend it to anyone who has that particular use case, but we then added the additional type generation for the front end. Plus, we have some custom types within our app, Money being the most interesting one. We decided to model Money as our first-class consideration rather than just letting JavaScript have the sole idea of a number. But yes, in a very long-winded way, yes, I'm very sensitive to the thing you described. And I hope, in this case, I did not fall prey to the CTO going away for the weekend and making a thing. JOËL: I think what I'm hearing is the key difference here is that you got buy-in from the team around this idea before you went out and implemented it. So you're not off doing your own things disconnected from the team and then imposing it from on high. The team already agreed this is the thing we want to do, and then you just did it for them. CHRIS: Largely, yes. Although I will say there are times that each developer on the team, myself included, have sort of gone away, come back with something, and said, "Hey, here's a WIP PR exploring an area." And there was actually...I'm forgetting what the context was, but there was one that happened recently that I introduced. I was like, I had to do this. And the team talked me out of it, and I ended up closing that PR. Someone else actually made a different PR that was an alternative implementation. I was like, no, that's better; we should absolutely do that. And I think that's really healthy.
That's a hard thing to maintain, but it's about making sure that everyone feels like they've got a strong voice and that we're considering all of the different ways in which we might approach the work. Most critically, you know, how does this impact users at the end of the day? That's always the primary consideration. How do we make sure we build a robust, maintainable, observable system, all those sorts of things? And primarily, this work should go in that other direction, but I also don't want to stifle that creative spark of I got this thing in my head, and I had to explore it. Like, we shouldn't then need to say never mind, throw away the work, and put it into a ticket. For as long as we can, if we can retain that more organic, intuitive process, I like that. Critically, with the ability for everyone to tell me, "No, this is a bad idea. Stop it. What are you doing?" And that has happened recently. I mean, they were kinder about it, but they did talk me out of a bad idea. So here we are. JOËL: So you showed up on Monday morning, not with telling everyone, "Hey, I merged this thing over the weekend." You're showing up with a work-in-progress PR. CHRIS: Yes, definitely. I mean, everything goes through a PR, and everything has discussion and conversation around it. That's a strong, strong nod to Derek Prior's wonderful talk, Building a Culture of Code Review; I forget the exact name of it. But it's one of my favorite talks in talking about the utility of code review as a way to share ideas and all of those wonderful things. So everything goes through code review, and particularly anything that is of that more exploratory architectural space. Often we'll say any one review from anyone on the team is sufficient to merge most things, but for something like that, I would want to say, "Hey, can everybody take a look at this? And if anyone has any reservations, then let's talk about it more." But if I or anyone else on the team gets everybody approving this sort of work, then cool, we're good to go. But yeah, code review is a critical, critical part of the process. JOËL: I'm curious about Primalize, the gem that you mentioned. It sounds like it's some kind of validation layer between some Ruby data structure and your serializers. CHRIS: It is the serializer, but in the process of serializing, it does run-time type validation, essentially. So as it's accessing, you know, you say first name. You have a user object. You pass it in, and you say, "Serializer, there's a first name, and it's a string." It will call the first name method on that user object. And then, it will check that it has the expected type, and if it doesn't, then, in our case, it sends to Sentry. We have configured it...it's actually interesting. In development and test mode, it will raise for a type mismatch, and in production mode, it will alert Sentry so you can configure that differently. But that ends up being really nice because these type mismatches end up being very loud early on. And it's surprisingly easy to maintain and ends up telling us a lot of truths about our system because, really, what we're doing is connecting data from many different systems and flowing it in and out. And all of the inputs and outputs from our system feel very meaningful to lock down in this way. But yeah, it's been an adventure.
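The raise-locally, warn-in-production behavior Chris just described could be wired up roughly like this; the handler hook and its signature follow my reading of the Primalize README, and the Sentry call assumes the sentry-ruby gem, so treat the details as illustrative:

  Primalize::Single.type_mismatch_handler = proc do |serializer, attribute, type, value|
    message = "#{serializer}##{attribute}: expected #{type}, got #{value.inspect}"

    if Rails.env.production?
      # Report to Sentry and keep serving the response
      Sentry.capture_message(message, level: :warning)
    else
      # Fail loudly in development and test
      raise TypeError, message
    end
  end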
And so you might expect, let's say, an integer on the Ruby side, but maybe at the serialization level, you're serializing it to a string. Do you have that sort of conversion step as part of your serializers sometimes, or is the idea that everything's already the right type on the Ruby side, and then we just, like, to JSON it at the end? CHRIS: Yep. Primalize, I think, probably works a little closer to what you're describing. They have the idea of coercions. So within Primalize, there is the concept of a timestamp; that is one of the types that is available. But a timestamp is sort of the union of a date and a time, and I think they might let a string through as well; I'm not sure. But frankly, for us, that was more ambiguity than we wanted or more blurring across the lines. And in the implementation that we've now built, date and time are distinct. And critically, a string is not a valid date or time; it is a string, that's another thing. And so there's a bunch of plumbing within the way you define the serializers. There are override methods so that you can locally within the serializer say, like, oh, we need to coerce from the shape of data into this other shape of data, even little in-line procs, so we can do it quickly. But the idea is that the data, once it has been passed to the serializer, should be of the right shape. And so when we get to the type assertion part of the library, we expect that things are in the asserted type and will warn if not. We get surprisingly few warnings, which is interesting now. This whole process has made us pay a little more attention, and it's been less arduous simultaneously than I would have expected because, like, this is kind of a lot of work that I'm describing. And yet it ends up being very natural when you're the developer in context, like, oh, I've been reading these docs for days. I know the shape of this JSON that I'm working with inside and out, and now I'll just write it down in the serializer. It's very easy to do in that moment, and then it captures it and enforces it in such a useful way. As an aside, as I've been looking at this, I'm like, this is just GraphQL, but inside out, I'm pretty sure. But that is a choice that we have made. We didn't want to adopt the whole GraphQL thing. But just for anyone out there who is listening and is thinking, isn't this just GraphQL but inside out? Kind of. Yes. JOËL: I think my favorite part of GraphQL is the schema, which is not really the selling point for GraphQL, you know, like the idea that you can traverse the graph and get any subset of data that you want and all that. I think I would be more than happy with a REST API that has some kind of schema built around it. And someone told me that maybe what I really just want is SOAP, and I don't know how to feel about that comment. CHRIS: You just got to have some XML, and some WSDLs, and other fun things. I've heard people say good things about SOAP. SOAP seems like a fine idea. If anything, I think a critical part of this is we don't have a JSON API. We have a very tightly coupled front end and back end, and a singular front end, frankly. And so that I think naturally...that makes the thing that I'm describing here a much more comfortable fit. If we had multiple different downstream clients that we're trying to consume from the same back end, then I think a GraphQL API or some other structured JSON schema, whatever it is type of API, and associated documentation and typing layer would be probably a better fit.
But as I've said many a time on this here Bike Shed, Inertia is one of my favorite libraries or frameworks (it's probably more of a framework), one of my favorite technological approaches that I have ever found. And particularly in building Sagewell, it has allowed us to move so rapidly; the idea that a change is, you know, one fell swoop that changes everything within the codebase. We don't have to think about syncing deploys for the back end and the front end and how to coordinate across them. Our app is so much easier to understand by virtue of that architecture that Inertia implies. JOËL: So, if I understand correctly, you don't serialize to JSON as part of the serializers. You're serializing directly to JavaScript. CHRIS: We do serialize to JSON. At the end of the day, Inertia takes care of this on both the Rails side and the client side. There is a JSON API. Like, if you look at the network inspector, you will see XHR requests happening. But critically, we're not doing that. We're not the ones in charge of it. We're not hitting a specific endpoint. It feels as an application coder much closer to a traditional Rails app. It just happens to be that we're writing our view layer. Instead of in ERB, we're writing them in Svelte files. But otherwise, it feels almost identical to a normal traditional Rails app with controllers and the normal routing and all that kind of stuff. JOËL: One thing that's really interesting about JSON as an interchange format is that it is very restrictive. The primitives it has are even narrower than, say, the primitives that Ruby has. So you'd mentioned sending a date through. There is no JSON date. You have to serialize it to some other type, potentially an integer, potentially a string that has a format that the other side knows how it's going to interpret. And I feel like it's those sorts of richer types when we need to pass them through JSON that serialization and deserialization or parsing on the other end become really interesting. CHRIS: Yeah, I definitely agree with that. It was a struggle for a while until we found this new approach that we're doing with the serializers and the type system. But so far, the only thing that we've done this with is Money. But on the front end, a while ago, we introduced a specific TypeScript type. So it's a phantom type, and I believe I'm getting this correct. It's a phantom type called Cents, C-E-N-T-S. So it represents...I'm going to say an integer. I know that JavaScript doesn't have integers, but logically, it represents an integer amount of cents. And critically, it is not a number, like, the lowercase number in the type system. We cannot add them together. We can't -- JOËL: I thought you were going to say, NaN. CHRIS: [laughs] It is not a number. I saw an n/a for not applicable somewhere in the application the other day. I was like, oh my God, we have a NaN? It happened? But it wasn't; it was just n/a, and I was fine. But yeah, so we have this idea of Cents within the application. We have a money input, which is a special input designed exactly for this. So to a user, it is formatted to look like you're entering dollars and cents. But under the hood, we are bidirectionally converting that to the integer amount of cents that we need. And strictly, within the type system, those are Cents. And you can't do math on Cents unless you use a special set of helper functions. You cannot generate Cents on the fly unless you use a special set of helper functions, the constructor functions.
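The real Cents type is TypeScript on the front end and isn't shown in the episode, but a hypothetical Ruby analogue makes the restriction concrete: a value you can only construct through a blessed helper and can only combine with its own kind. All names here are illustrative:

  class Cents
    attr_reader :value

    # The only blessed way to construct a Cents
    def self.from_integer(value)
      raise ArgumentError, "cents must be an Integer" unless value.is_a?(Integer)
      new(value)
    end

    # Arithmetic only works Cents-to-Cents, never with bare numbers
    def +(other)
      raise TypeError, "can only add Cents to Cents" unless other.is_a?(Cents)
      Cents.from_integer(value + other.value)
    end

    private_class_method :new

    def initialize(value)
      @value = value
    end
  end

  price = Cents.from_integer(1_999)
  tax   = Cents.from_integer(160)
  price + tax # => a Cents wrapping 2159
  price + 160 # => TypeError; no accidental bare-number math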
So we've been really restrictive about that, which was kind of annoying because a lot of the data coming from the server is just, you know, numbers. But now, with this type system that we've introduced on the Ruby side, we can assert and enforce that these are Money.new on the Ruby side, so using the Money gem. And they come down to the front end as capital C Cents in the type system on the TypeScript side. So we're able to actually bind that together and then enforce proper usage sort of on both sides. The next step that we plan to do after that is dates and times. And those are actually almost weirder because they end up...we just have to sort of say what they are, and they will be ISO 8601 date and time strings, respectively. But we'll have functions that know this is a date string; that's a thing. It is, again, a phantom type implemented within our TypeScript type system. But we will have custom functions that deal with that and really constrain...lock ourselves down to only working with them correctly. And critically, saying that is the only date and time format that we work with; there is no other. We don't have arbitrary dates. Is this a JSON date or something else? I don't know; there are too many date syntaxes. JOËL: I like the idea of what you're doing in that it sounds like you're very much narrowing that sort of window of where in the stack the data exists in the sort of unstructured, free-floating primitives that could be misinterpreted. And so, at this point, it's almost narrowed to the point where it can't be touched by any user or developer-written code because you've pushed the boundaries on the Rails side down and then on the JavaScript side up to the point where you define the translation on each side, or, I guess, a parser on one side and a serializer on the other. And they guarantee that everything is good up until that point. CHRIS: Yep, with the added fun of the runtime reflection on the Ruby side. So it's an interesting thing. Like, TypeScript actually has similar things. You can say what the type is all day long, and your code will consistently conform to that asserted type. But at the end of the day, if your JSON API gets in some different data...unless you're using a library like io-ts, which is one that I've looked at; it actually does parsing and returns a result object of did we parse to the thing that you wanted or did we get an error in that data structure? So we could get to that level on the client side as well. We haven't done that yet largely because we've essentially pushed that concern up to the Ruby layer. So where we're authoring the data, because we own that, we're going to do it at that level. There are a bunch of benefits of defining it there and then sort of reflecting it down. But yeah, TypeScript, you can absolutely lie to yourself, whereas Elm, a language that I know you love dearly, you cannot lie to yourself in Elm. You've got to tell the truth. It's the only option. You've got to prove it. Whereas in TypeScript, you can just kind of suggest, and TypeScript will be like, all right, cool, I'll make sure you stay honest on that, but I'm not going to make you prove it, which is an interesting sort of set of related trade-offs there. But I think we found a very comfortable resting spot for right now. Although now, we're starting to look at the edges of the Ruby system where data is coming in. So we have lots of webhooks and other external partners that we're integrating with, and they're sending us data.
And that data is of varying shapes. Some will send us a payload with the word amount, and it refers to an integer amount of cents because, of course, it does. Some will send us the word amount in their payload, and it will be a floating-point amount of dollars. And I get a little sad on those days. But critically, our job is to make sure all of those are the same and that we never pass dollars as cents or cents as dollars because that's where things go sad. Job number one for the engineering team at Sagewell is to never get the decimal place wrong in money. JOËL: That would be a pretty terrible mistake to make. CHRIS: It would. I mean, it happens. In fintech, that problem comes up a lot. And again, the fact that...I'm honestly surprised to see situations out there where we're getting in floating-point dollars. That is a surprise to me because I thought we had all agreed sort of as a community that it was integer cents, especially in a language that has integers. JavaScript, it's kind of making it up the whole time. But Ruby has integers. JSON, I guess, doesn't have integers, so I'm sort of mixing concerns here, but you get the idea. JOËL: Despite Ruby not having a static type system, I've found that generally, when I'm integrating with a third-party API, I get to the point where I want something that approximates like Elm's JSON decoders or io-ts or something like that. Because JSON is just a big blob of data that could be of any shape, and I don't really trust it because it's third-party data, and you should not trust third parties. And I find that I end up maybe cobbling something together commonly with like a bunch of usage of hash.fetch, things like that. But I feel like Ruby doesn't have a great approach to parsing and composing these validators for external data. CHRIS: Ruby as a language certainly doesn't, and the ecosystem, I would say, is rather limited in terms of the options here. We have looked a bit at the dry-rb stack of gems, so dry-validation and dry-schema, in particular, both offer potentially useful aspects. We've actually done a little bit of spiking internally around that sort of thing of, like, let's parse this incoming data instead of just coercing to hash and saying that it's got probably the shape that we want. And then similarly, I will fetch all day instead of digging because I want to be quite loud when we get it wrong. But we're already using dry-monads. So we have the idea of result types within the system. We can either succeed or fail at certain operations. And I think it's just a little further down the stack. But probably something that we will implement soon is at those external boundaries where data is coming in doing some form of parsing and validation to make sure that it conforms to a known data structure. And then, within the app, we can do things more cleanly. That also would allow us to, like, push the idea that this is floating-point dollars all the way out to the edge. And the minute it hits our system, we convert it into a Money.new, which means that cents are properly handled. It's the same type of money or dollar, same type of currency handling as everywhere else in the app. And so pushing that to the very edges of our application is a very interesting idea. And so that could happen in the library or sort of a parsing client, I guess, is probably the best way to think about it. So I'm excited to do that at some point. JOËL: Have you read the article, Parse, Don't Validate?
CHRIS: I actually posted that in some code review the other day to one of the developers on the team, and they replied, "You're just going to quietly drop one of my favorite articles of all time in code review?" [laughs] So yes, I've read it; I love it. It's a wonderful idea, definitely something that I'm intrigued by. And sort of bringing dry-monads into Ruby, on the one hand, feels like a forced fit, and yet it has also been one of the, I think, strongest architectural decisions that we've made within the application. There's so much imperative work that we ended up having to do. Send this off to this external API, then tell this other one, then tell this other one. Put the whole thing in a transaction so that our local data properly handles it. And having dry-monads do notation, in particular, to allow us to make that manageable but fail in all the ways it needs to fail, very expressive in its failure modes, that's been great. And then parse, don't validate we don't quite do it yet. But that's one of the dreams of, like, our codebase really should do that thing. We believe in that. So let's get there soon. JOËL: And the core idea behind parse, don't validate is that instead of just having some data that you don't trust, running a check on it and passing that blob of now checked but still untrusted data down to the next person who might also want to check it. Generally, you want to pass it through some sort of filter that will, one, validate that it's correct but then actually typically convert it into some other trusted shape. In Ruby, that might be something like taking an amorphous blob of JSON and turning it into some kind of value object or something like that. And then anybody downstream that receives, let's say, a money object can trust that they're dealing with a well-formed money value as opposed to an arbitrary blob of JSON, which hopefully somebody else has validated, but who knows? So I'm going to validate it again. CHRIS: You can tell that I've been out of the podcasting game for a while because I just started responding with yes, I love that blog post, without describing the core premise of it. So kudos to you, Joël; you are a fantastic podcast host over there. I will say one of the things you just described is an interesting...it's been a bit of a struggle for us. We keep sort of talking through what's the architecture. How do we want to build this application? What do we care about? What are the things that really matter within this codebase, and then what is all the other stuff? And we've been good at determining the things that really matter, thinking collectively as a group, and I think coming up with some novel, useful, elegant...I'm saying too many positive adjectives for what we're doing. But I've been very happy with sort of the things that we decide. And then there's the long-tail work of actually propagating that change throughout the rest of the application. We're, like, okay, here's how it works. Every incoming webhook, we now parse and yield a value object. That sentence that you just said a minute ago is exactly what I want. That's like a bunch of work. It's particularly a bunch of work to convert an existing codebase. It's easy to say, okay, from here forward, any new webhooks, payloads that are coming in, we're going to do it in this way. But we have a lot of things in our app now that exist in this half-converted way. There was a brief period where we had three different serializer technologies at play.
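A minimal sketch of that idea at a webhook boundary, combining the Hash#fetch loudness and dry-monads result types discussed above. The PaymentAmount class and payload shapes are hypothetical; Money.new takes integer cents and Money.from_amount takes dollars, per the RubyMoney gem:

  require "dry/monads"
  require "money"

  class PaymentAmount
    extend Dry::Monads[:result]

    # Parse an untrusted webhook payload into a trusted Money value.
    # Succeeds with Money (integer cents) or fails with an error message.
    def self.parse(payload, unit:)
      raw = payload.fetch("amount") # raises KeyError loudly if the key is missing

      case unit
      when :cents   then Success(Money.new(Integer(raw), "USD"))
      when :dollars then Success(Money.from_amount(Float(raw), "USD"))
      else Failure("unknown unit: #{unit.inspect}")
      end
    rescue ArgumentError, TypeError => e
      Failure("malformed amount: #{e.message}")
    end
  end

  PaymentAmount.parse({ "amount" => 1999 }, unit: :cents)    # => Success(Money, 19.99 USD)
  PaymentAmount.parse({ "amount" => 19.99 }, unit: :dollars) # => Success(Money, 19.99 USD)
  PaymentAmount.parse({ "amount" => "oops" }, unit: :cents)  # => Failure("malformed amount: ...")

Downstream code only ever sees the Money value inside a Success, so nothing past this boundary needs to re-check whether an amount was dollars or cents.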
Just this week, I did the work of killing off the middle ground one, the Primalize-based thing, and we now have only our new hotness and then the very old. We were using Blueprinter as the serializer as the initial sort of stub. And so that still exists within the codebase in some places. But trying to figure out how to prioritize that work, finishing out those maintenance-type conversions, is a tricky one. It's never the priority. But it is really nice to have consistency in a codebase. So it's...yeah, do you have any thoughts on that? JOËL: I think going back to the article and what the meaning of parsing is, I used to always think of parsing as taking strings and turning them into something else, and I think this really broadened my perspective on the idea of parsing. And now, I think of it more as converting from a broader type to a narrower type with failures. So, for example, you could go from a string to an integer, and not all strings are valid integers. So you're narrowing the type. And if you have the string hello world, it will fail, and it will give you an error of some type. But you can have multiple layers of that. So maybe you have a string that you parse into an integer, but then, later on, you might want to parse that integer into something else that requires an integer in a range. Let's say it's a percentage. So you have a value object that is a percentage, but it's encoded in the JSON as a string. So that first pass, you parse it from a string into an integer, and then you parse that integer into a percentage object. But if it's outside the range of valid percentage numbers, then maybe you get an error there as well. So it's a thing that can happen at multiple layers. And I've now really connected it with the primitive obsession smell in code. So oftentimes, when you decide, wait, I don't want a primitive here; I want a richer type, commonly, there's going to be a parsing step that should exist to go from that primitive into the richer type. CHRIS: I like that. That was a classic Joël wildly concise summary of a deeply complex technical topic right there. JOËL: It's like I'm going to connect some ideas from functional programming and a classic object-oriented code smell and, yeah, just kind of mash it all together with a popular article. CHRIS: If only you had a diagram. Podcast is not the best medium for diagrams, but I think you could do it. You could speak one out loud, and everyone would be able to see it in their mind's eye. JOËL: So I will tell you what my diagram is for this because I've actually created it already. I imagine this as a sort of like pyramid with different layers that keep getting smaller and smaller. So the size of a type is sort of the width of a layer. And so your strings are a very wide layer. Then on top of that, you have a narrower layer that might be, you know, it could be an integer, or, if you're parsing JSON, you first start with a string, then you parse that into a Ruby hash; not all strings are valid hashes. So that's going to be narrower. Then you might extract some values out of that hash. But if the keys aren't right, that might also fail. You're trying to pull the user out of it. And so at each layer it gets a richer type, but that richer type, by virtue of being richer, is narrower. And as you're trying to move up that pyramid at every step, there is a possibility for a failure. CHRIS: Have you written a blog post about this with said diagram in it? And is that why you have that so readily at hand?
[laughs] JOËL: Yes, that is the case. CHRIS: Okay. Yeah, that made sense to me. [laughs] JOËL: We'll make sure to link to it in the show notes. CHRIS: Now you have to link to Joël blog posts, whereas I used to have to link to them [chuckles] in almost every episode of The Bike Shed that I recorded. JOËL: Another thing I've been thinking about in terms of this parsing is that parsing and serializing are, in a sense, almost opposites of each other. Typically, when you're parsing, you're going from a broad type to a narrow one. And when you're serializing, you're going from a narrow type to a broader one. So you might go from a user into a hash into a string. So you're sort of going down that pyramid rather than going up. CHRIS: It is an interesting observation and one that immediately my brain is like, okay, cool. So can we reuse our serializers but just run them in reverse or? And then I try and talk myself out of that because that's a classic don't repeat yourself sort of failure mode of, like, actually, it's fine. You can repeat a little bit. So long as you can repeat and constrain, that's a fine version. But yeah, feels true, though, at the core. JOËL: I think, in some ways, if you want a single source of truth, what you want is a schema, and then you can derive serializers and parsers from that schema. CHRIS: It's interesting because you used the word derive. That has been an interesting evolution at Sagewell. The engineering team seems to be very collected around the idea of explicitness, almost the Zen of Python; explicit is better than implicit. And we are willing to write a lot of words down a lot of times and be happy with that. I think we actually made the explicit choice at one point that we will not implement an automatic camel case conversion in our serializer, even though we could; this is a knowable piece of code. But what we want is the grepability from the front end to the back end to say, like, where's this data coming from? And being able to say, like, it is this data, which is from this serializer, which comes from this object method, and being able to trace that very literally and very explicitly in the code, even though that is definitely the sort of thing that we could derive or automatically infer or have Ruby do that translation for us. And our codebase is more verbose and a little noisier. But I think overall, I've been very happy with it, and I think the team has been very happy. But it is an interesting one because I've seen plenty of teams where it is the exact opposite. Any repeated characters must be destroyed. We must write code to write the code for us. And so it's fun to be working with a team where we seem to be aligned around an approach on that front. JOËL: That example that you gave is really interesting because I feel like a common thing that happens in a serialization layer is also a form of normalization. And so, for example, you might downcase all strings as part of the serialization; definitely, like, dates always get written in ISO 8601 format whenever that happens. And so, regardless of how you might have it stored on the Ruby side, by the time it gets to the JSON, it's always in a standard format. And it sounds like you're not necessarily doing that with capitalization. CHRIS: I think the distinction would be the keys and the values, so we are definitely doing normalization on the values side. So ISO 8601 date and time strings, respectively; that is the direction that we plan to go for the values.
But then for the key that's associated with that, what is the name for this data, those we're choosing to be explicit and somewhat repetitive, or not even necessarily repetitive, but the idea of, like, it's first_name on the Ruby side, and it's first capital-N Name, camel case, or it's...I forget the name. It's not quite camel case; it's a different one but lower camel, maybe. But whatever JavaScript uses, we try to bias towards that when we're going to the front end. It does get a little tricky coming back into the Ruby side. So our controllers have a bunch of places where they need to know about what I think is called lower camel case, and so we're not perfect there. But that critical distinction between sort of the names for things, and the values for things, transformations, and normalizations on the values, I'm good with that. But we've chosen to go with a much more explicit version for the names of things or the keys in JSON objects specifically. JOËL: One thing that can be interesting if you have a normalization phase in your serializer is that that can mean that your serializer and parsers are not necessarily symmetric. So you might accept malformed data into your parser and parse it correctly. But then you can't guarantee that the data that gets serialized out is going to identically match the data that got parsed in. CHRIS: Yeah, that is interesting. I'm not quite sure of the ramifications, although I feel like there are some. It almost feels like formatting with Prettier and things like that, where they need to hold on to whitespace in some cases and throw it out in others. I'm thinking about how ASTs work. And, I don't know, there's interesting stuff, but, again, not sure of the ramifications. But actually, to flip the tables just a little bit, and that's an aggressive terminology, but we're going to roll with it. To flip the script, let's go with that, Joël; what's been up in your world? You've been hosting this wonderful show. I've listened in to a number of episodes. You're doing a fantastic job. I want to hear a little bit more of what's new in your world, Joël. JOËL: So I've been working on a project that has a lot of flaky tests, and we're trying to figure out the source of that flakiness. It's easy to just dive into, oh, I saw a flaky test. Let me try to fix it. But we have so much flakiness that I want to go about it a little bit more systematically. And so my first step has actually been gathering data. So I've actually been able to make API requests to our CI server. And the way we figure out flakiness is looking at the commit hash that a particular test suite run has executed on. And if there's more than one CI build for a given commit hash, we know that's probably some kind of flakiness. It could be a legitimate failure that somebody assumed was flakiness, and so they just re-run CI. But the symptom that we are trying to address is the fact that we have a very high level of people re-verifying their code. And so to do that or to figure out some stats, I made a request to the API grouped by commit hash and then was able to get the stats of how many re-verifications there are and even the distribution. The classic way that you would do that in Ruby is you would use the group_by function from Enumerable. And then, you would transform values so that instead of having, say, each commit hash point to all the builds, an array of builds that match that commit hash, you would then count those.
So now you have commit hashes that point to counts of how many builds there were for that commit hash. Newer versions of Ruby introduced the tally method, which I love, which allows you to basically do all of that in one step. One thing that I found really interesting, though, is that that will then give me a hash of commit hashes that point to the number of builds that are there. If I want to get the distribution for the whole project over the course of, say, the last week, and I want to say, "How many times do people run only one CI run versus running twice in the same commit versus running three times, or four times, or five or six times?" I want to see that distribution of how many times people are rerunning their build. You're effectively doing that tally process twice. So once you have a list of all the builds, you group by hash. You count, and so you end up with that. You have the Ruby hash of commit SHAs pointing to the number of times the build was run on that commit. And then, you again group by the number of builds for each commit SHA. And so now what you have is you'll have something like one, and then that points to an array of SHA one, SHA two, SHA three, SHA four, like all the builds. And then you tally that again, or you transform values, or however you end up doing it. And what you end up with is saying for running only once, I now have 200 builds that ran only once. For running twice in the same commit SHA, there are 15. For running three times, there are two. For running four times, there is one. And now I've got my distribution broken down by how many times it was run. It took me a while to work through all of that. But now the shortcut in my head is going to be you double tally to get distribution. CHRIS: As an aside, everything you're talking about with getting to that distribution is interesting. I feel like I've tried to solve that problem on data recently and struggled with it. But tally in particular, I just want to spend a minute on, because tally is such a fantastic addition to the Ruby standard library. I used to have, in sort of loose muscle memory, group_by(&:itself), transform_values(&:count), sort, reverse, to_h. That whole string of nonsense gets replaced by tally, and, oof, what a beautiful example of Ruby, and Enumerable, and all of the wonder that you can encapsulate there. JOËL: Enumerable is one of the best parts of Ruby. I love it so much. It was one of the first things that just blew my mind about Ruby when I started. I came from a PHP, C++ background and was used to writing for loops for everything and not the nice for each loops that a lot of languages have these days. You're writing like a legit for or while loop, and you're managing the indexes yourself. And there's so much room for things to go wrong. And being introduced to each blew my mind. And I was like, this is so beautiful. I'm not dealing with indexes. I'm not dealing with the raw implementation of the array. I can just say do a thing for each element. This is amazing. And that is when I truly fell in love with Ruby. CHRIS: I want to say I came from Python, most recently before Ruby. And Python has pretty nice list comprehensions and, in fact, in some ways, features that Enumerable doesn't have. But, still, coming to Ruby, I was like, oh, this Enumerable; this is cool. This is something. And it's only gotten better. It still keeps growing, and the idea of custom enumerables. And yeah, there's some real neat stuff in there.
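In code, the double tally Joël walks through comes out to two short steps (the build data here is made up for illustration):

  builds = [
    { commit_sha: "abc" }, { commit_sha: "abc" },
    { commit_sha: "def" },
    { commit_sha: "123" }, { commit_sha: "123" }, { commit_sha: "123" },
  ]

  # First tally: how many CI runs happened on each commit
  runs_per_commit = builds.map { |b| b[:commit_sha] }.tally
  # => {"abc"=>2, "def"=>1, "123"=>3}

  # Second tally: the distribution of rerun counts across commits
  distribution = runs_per_commit.values.tally
  # => {2=>1, 1=>1, 3=>1} (one commit ran twice, one ran once, one ran three times)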
JOËL: I'm going to be speaking at RubyConf Mini this fall in November, and my talk is all about Enumerators and ranges in Enumerable and ways you can use those to make the APIs of the objects that you create delightful for other people to use. CHRIS: That sounds like a classic Joël talk right there that I will be happy to listen to when it comes out. A very quick, semi-related aside: so, tally, beautiful addition to the Ruby language. On the Rails side, there was one that I used recently, which is where.missing. Have you seen where.missing? JOËL: I have not heard of this. CHRIS: So where.missing is fantastic. Let's assume you've got two related objects, so you've got like a has many blah, so like a user has many posts. I think you can...if I'm remembering it correctly, it's User.where.missing(:posts). So it's where dot missing and then parentheses the symbol posts. And under the hood, Rails will do the whole LEFT OUTER JOIN where the id is null, et cetera. It turns into this wildly complex, or understandably complex, SQL query, but there's a lot going on there. And yet it compresses down so elegantly into this nice, little ActiveRecord bit. So where.missing is my new favorite addition into the Rails landscape to complement tally on the Ruby side, and I think tally is Ruby 2.7, I want to say. So it's been around for a while. And where.missing might be a Rails 7 feature. It might be a 6-something, but still, wonderful features, ever-evolving these tool sets that we use. JOËL: One of the really nice things about Enumerable and family is the fact that they build on a very small number of primitives, and so as long as you basically understand blocks, you can use Enumerable and anything in there. It's not special syntax that you have to memorize. It's just regular functions and blocks. Well, Chris, thank you so much for coming back for a visit. It's been a pleasure. And it's always good to have you share the cool things that you're doing at Sagewell. CHRIS: Well, thank you so much, Joël. It's been an absolute pleasure getting to come back to this whole Bike Shed. And, again, just to add a note here, you're doing a really fantastic job with the show. It's been interesting transitioning back into listener mode for the show. Weirdly, I wasn't listening when I was a host. But now I've regained the ability to listen to The Bike Shed and really enjoy the episodes that you've been doing and the wonderful spectrum of guests that you've had on and variety of topics. So, yeah, thank you for hosting this whole Bike Shed. It's been great. JOËL: And with that, let's wrap up. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
This week John talks about his latest software design triumph with a custom OverflowScrollView, a catch-up with some old colleagues, and the quality of the iPhone 14's cameras, but he has quite a major complaint when it comes to a certain app and its subscription and cloud-based storage when upgrading your device. Scotty talks more on adding a Parse backend for MoneyWell, the 32nd anniversary of the Macintosh Portable introduction, and his woes with Xcode 14 and iOS 16 on a client project. Macintosh Portable Introduction | YouTube Splice CYME’s Avalanche and Peakto apps Splice does not preserve projects after restoring from backup on a new phone
Dana Lok is a visual artist who lives and works in Brooklyn, NY. In her paintings and drawings, she aims to capture the magic and unease found in the gap between signs and the things they represent. Her recent work imagines metaphors for how we create knowledge and draw conceptual schemes together. Solo exhibitions include Part and Parse at Miguel Abreu Gallery, New York (2022); One Second Per Second at PAGE, New York (2020); Words Without Skin at Clima, Milan (2019); Mind's Mouth at Bianca D'Allessandro, Copenhagen (2018); Soft Fact at Clima, Milan (2017); and The Set of All Sets at Chewday's, London (2016). Group shows include Gravity, a proposal at Sikkema Jenkins & Co. (2022); Regroup Show at Miguel Abreu Gallery (2021); Fifteen Painters at Andrew Kreps Gallery (2021); and PAGE (NYC) at Petzel Gallery (2021), all in New York. Dana received her MFA from Columbia University (2015) and her BFA from Carnegie Mellon University (2011), and attended the Skowhegan School of Painting and Sculpture (2016). In 2018, she was awarded the Rema Hort Mann Emerging Artist Grant. Lok's work has been covered in Hyperallergic, Cura and Frieze.
This week's challenge: Quantify yourself.
You can hear the after show and support Do By Friday on Patreon!
------
Edited by Quinn Rose
Engineered by Cameron Bopp
------
Show Links
ICYMI: Slate podcast with hosts Madison Malone Kircher and Rachelle Hampton.
Ricky Montgomery - Wikipedia
Ricky Montgomery - YouTube
Virtual Reality from 1990, Jaron Lanier, Eye phones, - YouTube
‘Cover Story' Podcast Goes Into World of Psychedelic Therapy
Pzizz | Sleep at the push of a button
Professor Seagull's Smartshop
Under the Banner of Heaven (TV Mini Series 2022) - IMDb
#44 Shine On You Crazy Goldman | Reply All
"It's Always Sunny in Philadelphia" The Gang Beats Boggs: Ladies Reboot (TV Episode 2018) - IMDb
Why is a $3 bill gay? : AskReddit
Queer as a Three Dollar Bill Origin | The Village Idiom
Exist · Understand your behaviour.
With iOS 15, Apple reveals just how far Health has come — and how much further it can go | TechCrunch
Apple advances personal health by introducing secure sharing and new insights - Apple
Sleep++ on the App Store
SleepWatch — Find Your Best Sleep.
AutoSleep Track Sleep on Watch on the App Store
Tracker – Mood & Energy Diary on the App Store
WaterMinder® - track your daily water intake, hydrate, feel better!
Get Started - Quantified Self
How to Export, Parse and Explore Your Apple Health Data with Python | Mark Koester
Monitor, Track and Maximise Training and Recovery
Nevada
(Recorded on Wednesday, June 1, 2022)
Next week's challenge: Eat hand salad.
Hello and welcome back to Equity, a podcast about the business of startups, where we unpack the numbers and nuance behind the headlines. Happily, we were once again at full strength this week, with Alex Wilhelm, Natasha Mascarenhas and Mary Ann Azevedo chatting, and Grace handling production. You can tell from the topic list today that we are in an odd time. There are myriad signals that the startup market is slowing down. And there are some counter-narrative data points that paint a more complex picture. Where do you stand in your own viewpoint? Well, read on for some data to consider:
Natasha gave us a brief update on All Raise's annual VC summit, but she'll get into more on an upcoming Wednesday show (stay tuned!)
Monte Carlo just raised a unicorn round, worth $135 million at a $1.6 billion valuation. On the other hand, Bolt is laying off staff amidst a correction in the larger startup market, and perhaps its own space.
If startup news is pointing in two directions, so too are data from the venture capital world. While Sequoia is warning founders about a downturn, a16z just raised a king's ransom to pour into the web3 market. Parse that as you will.
There were other bits of news to consider as we work to understand where the startup world truly is today, including news from Zip and Nowports -- two newly-minted unicorns that Mary Ann recently profiled.
And we closed on, what else, drama in fintech. As Stripe and Plaid gear up to battle, Finix is either in the fray, or about to jump in, depending on your perspective. What's clear is that increasingly overlapping fintech giants are going to rub up against one another. You can read more about that in The Interchange, out on Sunday.
Hugs from us to you, and we will talk to you next week!
Equity drops every Monday at 7 a.m. PDT and Wednesday and Friday at 6 a.m. PDT, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.
In this episode, Benji talks to David Cardiel, Vice President of Marketing at WordPress VIP & Parse.ly. Today we learn how David used data to determine pillar content and ultimately drive revenue. He explains how data should inform and continuously improve our marketing efforts across the board and what we should be paying attention to in all the numbers.