Podcasts about Ruby on Rails

Server-side open source web application framework

  • 640 PODCASTS
  • 3,331 EPISODES
  • 42m AVG DURATION
  • 1 WEEKLY EPISODE
  • LATEST EPISODE: Jul 12, 2025
Popularity of Ruby on Rails podcast episodes, 2017–2024 (chart)


Latest podcast episodes about Ruby on Rails

Lex Fridman Podcast
#474 – DHH: Future of Programming, AI, Ruby on Rails, Productivity & Parenting

Jul 12, 2025


David Heinemeier Hansson (aka DHH) is a legendary programmer, the creator of Ruby on Rails, co-owner & CTO of 37signals (makers of Basecamp, HEY, & ONCE), and a NYT-best-selling author (with Jason Fried) of four books: REWORK, REMOTE, Getting Real, and It Doesn't Have To Be Crazy At Work. He is also a race car driver whose results include a class win at the 24 Hours of Le Mans.

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep474-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/dhh-david-heinemeier-hansson-transcript

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
DHH's X: https://x.com/dhh
DHH's Website: https://dhh.dk/
Ruby on Rails: https://rubyonrails.org/
37signals: https://37signals.com/
DHH's books:
Rework: https://amzn.to/44rSKob
Remote: https://amzn.to/44GFJ91
It Doesn't Have to Be Crazy at Work: https://amzn.to/46bzuwx
Getting Real: https://amzn.to/4kzoMDg

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex
Lindy: No-code AI agent builder. Go to https://go.lindy.ai/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
NetSuite: Business management software. Go to http://netsuite.com/lex

OUTLINE:
(00:00) - Introduction
(00:58) - Sponsors, Comments, and Reflections
(08:48) - Programming - early days
(26:13) - JavaScript
(36:32) - Google Chrome and DOJ
(44:19) - Ruby programming language
(51:30) - Beautiful code
(1:09:31) - Metaprogramming
(1:12:52) - Dynamic typing
(1:20:10) - Scaling
(1:33:03) - Future of programming
(1:50:34) - Future of AI
(1:56:29) - Vibe coding
(2:05:01) - Rails manifesto: Principles of a great programming language
(2:29:27) - Why managers are useless
(2:38:48) - Small teams
(2:44:55) - Jeff Bezos
(3:00:13) - Why meetings are toxic
(3:07:58) - Case against retirement
(3:15:15) - Hard work
(3:20:53) - Why we left the cloud
(3:24:04) - AWS
(3:33:22) - Owning your own servers
(3:39:35) - Elon Musk
(3:49:17) - Apple
(4:01:03) - Tim Sweeney
(4:12:37) - Fatherhood
(4:38:19) - Racing
(5:05:23) - Cars
(5:10:41) - Programming setup
(5:25:51) - Programming language for beginners
(5:39:09) - Open source
(5:48:01) - WordPress drama
(5:59:18) - Money and happiness
(6:08:11) - Hope

Unstoppable Mindset
Episode 349 – Unstoppable Coach For High-Achieving Leaders with Ashley Rudolph

Jul 1, 2025 • 67:41


Today's guest, Ashley Rudolph, is an executive coach working with high-achieving leaders and executives who are at a "crossroads": they look GREAT on paper, but tend to exhibit fears and have other problems that affect their confidence and performance. Ashley was not always a coach and, in fact, did not view herself as a coach during most of her career. She grew up in the Bronx in New York City. She attributes her high confidence level to the high bar her parents set for her as well as to the environment where she grew up.

After high school Ashley enrolled in Babson College, where she quickly had to learn much about business and working as a team. She will tell us that story. After graduation she secured a job, but was laid off and then went back to Babson to earn her Master's degree.

Ashley began working and quickly rose through the corporate ranks of tech companies. She tells us how, while not really tech savvy at first, she pushed herself to learn what she needed to know to work as part of a team and eventually to lead high-tech teams.

In 2023 her high-tech employment world took a turn, which she will describe. The bottom line is that she was laid off from her vice presidential position, and after pondering what to do she realized that she had actually been coaching her employees for some time, so she began hiring herself out as an executive coach. We get the benefit of a number of her insights on leadership, confidence building, and how to handle mentally whatever life throws at us. What Ashley says during our episode makes a great deal of sense, and I believe you will gain a lot from what she has to say. You can reach out to Ashley through the contact information in the show notes for this Unstoppable Mindset episode.

About the Guest:

Ashley Rudolph is an executive coach for high-achieving leaders and executives at a crossroads—those who have built success on paper but are ready to step into something greater. Her work is grounded in a bold belief: true transformation isn't about doing more—it's about leading differently.

A former tech executive, she scaled from IC to VP in just five years, leading $75M+ deals and teams of 250+ at high-growth companies. She knows what it takes to succeed in high-stakes environments—not just in execution, but in the deeper, often invisible work of leadership: making bold decisions, navigating uncertainty, and owning your impact.

Her signature methodology, The Three Dimensions of Transformation, helps leaders unlock their full potential by focusing on mindset, strategy, and elite execution.

Whether guiding clients through reinvention, leadership evolution, or high-stakes career moves, Ashley helps them break free from outdated success metrics and create momentum that lasts. Her insights have been featured in Inc., U.S. News & World Report, The New York Post, Success Magazine, Apartment Therapy, and more. She also writes The Operator's Edge, a newsletter on the unseen shifts that drive real momentum in leadership and career growth. Because true leadership isn't about following a path. It's about defining your own.

Ways to connect with Ashley:

My website, which has details about me, my programs, and insights about high achievers in the workplace: www.workwithashleyr.com

My newsletter, which gets published every single Monday morning with my expert advice for high achievers on how to succeed in the workplace:
newsletter.workwithashleyr.com    My LinkedIn: https://www.linkedin.com/in/ashleyrudolph/   About the Host:   Michael Hingson is a New York Times best-selling author, international lecturer, and Chief Vision Officer for accessiBe. Michael, blind since birth, survived the 9/11 attacks with the help of his guide dog Roselle. This story is the subject of his best-selling book, Thunder Dog.   Michael gives over 100 presentations around the world each year speaking to influential groups such as Exxon Mobile, AT&T, Federal Express, Scripps College, Rutgers University, Children's Hospital, and the American Red Cross just to name a few. He is Ambassador for the National Braille Literacy Campaign for the National Federation of the Blind and also serves as Ambassador for the American Humane Association's 2012 Hero Dog Awards.   https://michaelhingson.com https://www.facebook.com/michael.hingson.author.speaker/ https://twitter.com/mhingson https://www.youtube.com/user/mhingson https://www.linkedin.com/in/michaelhingson/   accessiBe Links https://accessibe.com/ https://www.youtube.com/c/accessiBe https://www.linkedin.com/company/accessibe/mycompany/ https://www.facebook.com/accessibe/       Thanks for listening!   Thanks so much for listening to our podcast! If you enjoyed this episode and think that others could benefit from listening, please share it using the social media buttons on this page. Do you have some feedback or questions about this episode? Leave a comment in the section below!   Subscribe to the podcast   If you would like to get automatic updates of new podcast episodes, you can subscribe to the podcast on Apple Podcasts or Stitcher. You can subscribe in your favorite podcast app. You can also support our podcast through our tip jar https://tips.pinecast.com/jar/unstoppable-mindset .   Leave us an Apple Podcasts review   Ratings and reviews from our listeners are extremely valuable to us and greatly appreciated. They help our podcast rank higher on Apple Podcasts, which exposes our show to more awesome listeners like you. If you have a minute, please leave an honest review on Apple Podcasts.       Transcription Notes:   Michael Hingson ** 00:00 Access Cast and accessiBe Initiative presents Unstoppable Mindset. The podcast where inclusion, diversity and the unexpected meet. Hi, I'm Michael Hingson, Chief Vision Officer for accessiBe and the author of the number one New York Times bestselling book, Thunder dog, the story of a blind man, his guide dog and the triumph of trust. Thanks for joining me on my podcast as we explore our own blinding fears of inclusion unacceptance and our resistance to change. We will discover the idea that no matter the situation, or the people we encounter, our own fears, and prejudices often are our strongest barriers to moving forward. The unstoppable mindset podcast is sponsored by accessiBe, that's a c c e s s i capital B e. Visit www.accessibe.com to learn how you can make your website accessible for persons with disabilities. And to help make the internet fully inclusive by the year 2025. Glad you dropped by we're happy to meet you and to have you here with us.   
Michael Hingson ** 01:20 Well, hello, everyone, wherever you happen to be today, I am Michael Hingson, and you are listening to or watching or both, unstoppable mindset today, our guest is Ashley Rudolph, who is a coach, and I like something Ashley put in her bio that I thought was really interesting, and that is that Ashley's work is grounded in the belief that true transportation is not really about doing more, but rather it's doing things differently. And I want, I'm going to want to learn about that. I think that's fascinating, and I also think it is correct, but we will, we will definitely get to that and talk about that. Ashley approached me a little while ago and said, I'd like to explore coming on your content, your podcast. And I said, Well, sure, except I told her the same thing that I tell everyone who comes on the podcast, there is one hard and fast rule you got to follow, and that is, you got to have fun, or you can't come on the podcast, so you got to have fun. Ashley, just   Ashley Rudolph ** 02:26 reminding you, I'm ready. I am ready. I'm coming into the podcast today with all of my best jokes, all of my best tricks. Oh, good.   Speaker 1 ** 02:35 Well, we want to hear them all. Well, thank you for being here, and it's a pleasure to have you on unstoppable mindset.   Ashley Rudolph ** 02:42 Yes, thank you so much for having me. I was just really taken by your entire background story, and I took a risk and sent you a message. So thank you so much for having me on the podcast.   Speaker 1 ** 02:55 Well, I have always been of the opinion that everyone has stories to tell, and a lot of people just don't believe they do, but that's because they don't think about it. And so what I tell people who say that to me when we talk about them coming on the podcast, my job is to help bring out the stories. Now, you didn't say that, and I'm not surprised, but still, a lot of people say that. And the reality is, I believe everyone is more unstoppable than they think they are, and that they undersell themselves, they underrate what they are and what they can do,   Ashley Rudolph ** 03:28 yeah, and honestly, I 100% agree with you, and that's why, and maybe I'm jumping ahead a little bit, but you triggered a thought. That's why I spend every single one of my first coaching meetings with a client, having them talk me through either their professional history or their wins from the past year. And in those conversations, my feedback is also is always Hey, you're not giving yourself enough credit for the things that you're doing. Like, these are amazing stories, or like, repeating things back to them a little bit differently than they would have phrased it, but that's 100% accurate. We don't sell ourselves enough,   Speaker 1 ** 04:08 even to ourselves. We don't sell ourselves enough, especially to ourselves. Yeah, yeah, yeah. Well, tell me a little about kind of the early Ashley growing up and all that, and you know where you came from, and all that sort of stuff,   Ashley Rudolph ** 04:23 yeah. So I grew up in New York. I'm from the Bronx. Oh and yeah, yeah. So, so is my   Michael Hingson ** 04:30 mom   Ashley Rudolph ** 04:31 Aqua? Oh my gosh, I had no idea. So I grew up in the Bronx and grew up with my mom. My dad was around too, and, oh, it's interesting, and I'm sure this will make sense, but I grew up going to Catholic schools from first grade to senior year of high school, and something about me, it was like I was always a very self assured. 
Determined person, and that carried through all the way through my adulthood. And maybe that comes from me being a New Yorker. Maybe that comes from my mom being a an immigrant. She's from the Caribbean. She's from the Bahamas, and she had a very high bar for what success looked like I don't know where it comes from, but yeah, yeah. So that's a little bit about me growing up and kind of who I was   Speaker 1 ** 05:28 as a kid. So now, where are you living? Now?   Ashley Rudolph ** 05:32 I am in New York again, so I moved back to New York in 2020,   Speaker 1 ** 05:38 okay, wow, just in time for the pandemic. Lucky you?   Ashley Rudolph ** 05:43 Yeah, I actually moved back to New York on election day in 2020 so I missed the early pandemic. But yeah, yeah, yeah,   Speaker 1 ** 05:53 I was in New York speaking on March 5, and that night, I got back to the hotel, and my flight was supposed to go out at like, 415 in the afternoon, yeah. And I said, when I started hearing that they were talking about closing down the city, I think I better leave earlier. So I was on a 730 flight out the next day. Oh my gosh,   Ashley Rudolph ** 06:18 wow. So you just made it out and that yeah, and at the time, I was living in Boston, and I actually was went on a vacation with a friend, and we flew back the day before they shut down the airports in Boston. So   Speaker 1 ** 06:36 that was lucky. Yeah, did you live in Boston itself or a suburb?   Ashley Rudolph ** 06:42 Yeah, I lived in Boston for two years, I think, yeah, I lived in the city, yeah. I   Speaker 1 ** 06:50 lived in Winthrop for three years, and commuted across Boston to Cambridge every day,   Ashley Rudolph ** 06:55 yeah, oh, my god, yeah. So I worked in Cambridge and I lived in the West End, right above TD Garden.   Speaker 1 ** 07:03 Oh, okay, yeah, I hear that Durgan Park closed in, in near Faneuil Hall.   Ashley Rudolph ** 07:13 Oh, yeah, well, I have to admit, I didn't go there that much. Was living in Boston.   Speaker 1 ** 07:19 It was a fun place. It was a family style thing, and they had tables for four around the outer edges inside the restaurant. But you couldn't sit at one of those unless you had four people. And the serving staff was trained to be a little bit on the snotty side. And I went in fun. Oh, wait. Oh, absolutely. They made it fun. But I went in and the hostess, there were three of us, and my guide dog at the time, Holland, who was a wonderful, cute golden retriever, and she said, Oh, we're going to put you at one of the tables for four. And I said, Well, okay, we appreciate that. And Holland was under the table. This waitress comes up and she says, you're not supposed to be sitting here. This is a table for four, and there are only three of you. And I said, but they told us we could. No Nobody told you you could sit here. You got to go back over to the big tables. And I said, Look, we have a guide dog under the table, and he's really happy. And they told us we could be here because of the dog. And she's, I don't believe that at all. I'm, I'm gonna go check. I don't believe you. She goes away and she comes back a little bit later. No, you're not supposed to sit here. And I said, Look, lift up the tablecloth and look under the table. I'm not going to fall for that. Just do it. She finally did. And there's Holland staring out with these big brown eyes. And she just melted. She goes away and comes back. And one of the things about Durgan Park is they have big plates of prime rib. 
And she brought this plate of prime ribs somebody hadn't eaten at all, and she said, can I give this to the dog? And so, you know, normally, I would say no, but we were trying to make peace in our time, so I said, Oh, sure. And she and Holland had a great time. So it was fun.   Ashley Rudolph ** 08:59 Oh, and Holland got prime rib. Holland   Speaker 1 ** 09:03 got prime rib. What a treat. And so did and so did the rest of us, but, but we had to pay for ours. But I missed Durgin Park. It was a fun place to go, but I understand that it is closed, and I don't know whether it's oh, well, oh, that's unfortunate, but Quincy market's a wonderful place to go. It's not a lot of interesting things. So you, so you went through high school. So you went through high school in New York, went in in the Bronx tough neighborhood, and then what did you do? So   Ashley Rudolph ** 09:34 I then went to college. So I went to Babson College, which is, well, it's in Massachusetts, it's in Wellesley, and it's actually right next door to Wellesley College. Yeah, yeah. So I went there and I studied business, and that was basically where I learned how to be successful in the workplace, which is kind. Funny, because I found that over the years, a lot of people will say, you know, I went to college, but by the end of it, maybe I didn't know what my transferable skills were, or I studied something that isn't related to what I was doing or what I did as a professional, and I always felt the opposite, like in freshman year at Babson, they gave us $3,000 to, like, start a company as a as a students. So all of us just had to start this company. We had our business ideas. There was a CEO, a CMO, a CFO. We had like rules assigned. And that was my first experience of what a workplace could be like, although it was with 18 year olds, so maybe not totally reflective, but we had performance reviews, we had a head of HR, we had like, company meetings, so we were doing things within a framework, and they all kind of translated into the workplace, different players. So Babson basically kind of turned me into the business person that I am   Speaker 1 ** 11:09 today. Now, did each person get $3,000 and they started their own company?   Ashley Rudolph ** 11:14 Oh, no. So there were, there were maybe 30 of us, and we started a company with that with $3,000 Okay? Exactly with that investment, it was managed quite tightly. There's not a lot that you can do with $3,000 right? So you can probably guess that a lot of the businesses turned out to be the same. So there was always a T Shirt Company or a company the when the LIVESTRONG wristbands were popular, then we were like, oh, let's customize these wristbands. So yeah, yeah. The the company ideas basically ended up being the same, because there's not that much that you could do with that, yeah,   Speaker 1 ** 11:56 yeah, yeah. So much you can do unless you start making a bunch of money,   Ashley Rudolph ** 12:00 yeah, yeah, yeah. And in today's landscape, I guess there's more that you can do with digital products and stuff like that. But yeah, yeah, we, we had to do physical so we were pretty limited, yeah, well, that's   Speaker 1 ** 12:13 okay, but still, if the company is successful, and was it successful? Yeah,   Ashley Rudolph ** 12:19 we, did turn a profit, and then for all of the businesses that did turn a profit, you had to donate the profits to a local charity. So we did. We donated ours to a local organization. We threw an event in partnership with the organization. 
It was just, it was nice. So, yeah, oh,   Speaker 1 ** 12:43 cool. So, how, how long did the company last? Essentially, was it all four years?   Ashley Rudolph ** 12:50 It was the first   Speaker 2 ** 12:52 year, just the first year, okay, yeah, okay, yeah, that's still, that's pretty cool.   Ashley Rudolph ** 12:58 Yeah, it is. I have to say that I learned a lot,   Speaker 1 ** 13:02 yeah, well, you're you're kind of forced to or you don't succeed. So I was going to ask you why you felt that you learned how to be successful. But now it's pretty clear, yeah, yeah, yeah.   Ashley Rudolph ** 13:13 So we started there in freshman year, and then sophomore, junior and senior year was kind of more of a deep dive on specific skills. So that you take our accounting classes, finance marketing, if you were into retail, there was like a retail management class at the core classes. So we had, you know, liberal arts courses, so art history, yeah, philosophy, things like that. But yeah, everything was mostly centered around business and cool, yeah, yeah. Well, that's   Speaker 1 ** 13:47 pretty exciting. Did you did you go do any graduate work anywhere?   Ashley Rudolph ** 13:52 It's funny, yes, I did. So I graduated from Babson, and my first job was in a creative agency, and I was doing media buying, and at the time it was 2008 and we were buying ads in school newspapers, which was dying like it was pretty much On on its last leg, and I just had this thought when I was doing it, and that I wasn't inspired by the work, because it wasn't growing, it was going away. And it was clear, yeah, and that. And actually my first job, I got laid off because it was a dying industry, and the team needed to be smaller, and at that point, it's my first job. So it was very devastating to me. I had never gone through anything like that before. So then I decided to go back to school. So I did my masters. I actually. Went back to Babson, but in an international program. So I spent my first semester in France, my second semester in China, and then my final semester at Babson. Ah,   Speaker 1 ** 15:13 so why was the newspaper industry going away? Just because everything was going online?   Ashley Rudolph ** 15:18 Exactly, yeah, things were shifting more digital. Yeah, it's exactly   Speaker 1 ** 15:23 that, so they didn't need as many people selling and doing other things as they did before. Yeah,   Ashley Rudolph ** 15:28 yeah, exactly. Or companies were figuring out different ways to reach college students that wasn't dependent on getting in the school newspaper.   15:39 Yeah? Yeah, yeah,   Speaker 1 ** 15:42 yeah. So you got your master's degree from Babson, and then what did you   Ashley Rudolph ** 15:47 do? I got my master's degree from Babson, and I'll fast forward a little bit, because what's funny is that after I graduated, I still didn't quite know what I wanted to do, but I figured it out. I ended up going back into marketing. But if you remember, what I described was, in that first job, I wasn't connected to the mission. I wasn't inspired by where the industry was going. So I ended up pivoting into nonprofits. And my first job after graduating from my masters was running digital media, so not physical media, so I shifted into social media and online marketing. Had a nonprofit, right? So I was connected to the mission. I felt like the work that I was doing was for a good cause, and it was an industry that was new and that was growing, and that was ever changing and exciting. 
So I did that for about three years, so first at a nonprofit, and then at an a charter school network that was in New York and New Jersey at the time, but has since expanded far beyond that. So, yeah, I went into mission driven work, and I went into digital marketing and digital media. And I think what I took away from that chapter of my career was that I want to be in an industry that is ever evolving. So, yeah, so after my experience in the nonprofit and education space, that's when I jumped into tech. So I jumped into tech after that, and spent a decade in the tech industry. And obviously, tech is ever changing. I had access to so many different opportunities. I grew really fast. I started at the first company, the first tech company that I worked for. I was a program manager, and five years later I was a vice president, right? So, like, I was able to seize opportunities and work really hard and get to the level that I wanted to get to I was very ambitious, so I think tech just kind of gave me everything I wanted. Career wise, how   Speaker 1 ** 18:09 did you progress so fast to go from being a program manager to the level of Vice President in what generally would be defined as a pretty short time? Yeah,   Ashley Rudolph ** 18:20 yeah, yeah. So some of it was hard work, and I think the other factor was luck, and the other factor was going after whatever it was that was in front of me. So taking risks. So I would say, with the hard work part, I worked a lot. See when I first, when I started that job, I was actually a Program Manager for Back End Web Development, which was Ruby on Rails, coding a coding language. And then I was also a program manager for data science. I had no experience in either I was not technical. I did not have the technical skills or technical aptitude to do this, but I did have the desire to learn. So my first month at that job, I worked seven days a week. I went to workshops on the weekend. I did coding workshops, I read through all of the documentation. I sat in all of the programs that I was managing. I just dug deep. And I think that first year of immersing myself in everything kind of set the foundation for me.   Speaker 1 ** 19:38 So you made yourself pretty technical by the time it was all said and done,   Ashley Rudolph ** 19:42 yeah, yes, yes, and not on the level of any of my instructors or the students that actually took the programs. But I cared about learning, and I cared about having a certain level of fluency in order to I had to hire instructors for the program so I couldn't fumble my. Words, right? So, yeah, yeah. So I taught myself, yeah,   Speaker 1 ** 20:05 you learned. You learned enough. You You weren't trying to be the most technical person, but you learned enough to be able to interact with people and hold your own. Yeah, which, which is the important thing, I think. And for me, I know at one point, I had a job that was phased out when Xerox bought the company and I couldn't find another job. And it wasn't because of a lack of trying, and it wasn't because I didn't have the skills, but rather, as societal norms typically go, the belief is blind people can't work, as opposed to what we really can and can't do. So I eventually started my own company selling computer aided design systems, and for me, as a blind person, of course, I'm not going to sit in front of a CAD computer or even a PC based CAD system, which is what we sold. So I had to learn, however, all about how to operate the system. Learn about PCs. 
So I learned how to how to build PCs. I learned about CAD so I could actually walk someone through the process of drawing without actually having to do it, so I understand what, exactly what you're saying. Yeah, and it was important to do that. Yeah. Yeah,   Ashley Rudolph ** 21:21 it was important, and no one told me to do that, right? And I'm sure that no one told you to do that too, but there was just something in me that knew that I was excited about this work, or I wanted opportunities, and this was the best way that I knew how to go after it. Yeah, yeah.   Speaker 1 ** 21:43 Well, and, and it is the way you still have you do have to learn enough to be able to hold your own, but I Yeah, but I think it's also important in learning that that you're also not trying to threaten anyone else. You're just trying to be able to communicate with them   Ashley Rudolph ** 22:00 exactly, exactly, yes,   Speaker 1 ** 22:05 yeah. All too often, people view others as threats when they really shouldn't. But you know,   Speaker 2 ** 22:12 that's Yeah, another story gonna do Yeah, right, right.   Speaker 1 ** 22:16 Well, so for within five years, you became a vice president. What was the tech that y'all were really developing?   Ashley Rudolph ** 22:22 Yeah, great question. So what's interesting about this is that it wasn't so the first company I worked for wasn't a tech company, and that they were building tech it's actually a coding boot camp. So they were teaching people either how to code or how to become a UX designer, or how to become a product manager. So that was the product after a while. And I think long after I left the company, they did develop their own tech. So they developed an online an LMS learning management system, and there was digital content. But when I started, it was really about the boot camp era and teaching people how to code, because there were all these engineering jobs and web development jobs that were available and not enough, not enough talent, not   Speaker 2 ** 23:13 enough talent to go around. Yeah, yeah, yeah, yeah.   Ashley Rudolph ** 23:17 Which is when you think about today's market and where we're, where we are, that was only 10 years ago, and it's a completely different story. Now, the market is flooded with too many web developers. Yeah,   Speaker 1 ** 23:29 it is, but I would say, from my standpoint of seeing what they produce in terms of making web content accessible, not nearly enough of them know how to do that, which is another story,   Ashley Rudolph ** 23:41 yeah, yeah, yeah, which is so interesting. And yeah, unacceptable, unfortunate, because there were always teams that were in charge of accessibility at the companies that I worked for, but then having someone be in charge of it, and then properly resourcing the accessibility team is a whole other story. And I think so many companies view it as just oh yeah, I checked the box. My website is accessible. But did you really build with your end users in mind, and the answer is probably no,   Speaker 1 ** 24:23 probably not, yeah, and all too often that ended up being the case. Well, so what did you do after you became vice president?   Ashley Rudolph ** 24:32 Yeah, so that was tough. You said it, and you said, I climbed really fast. And that's true, I did, and because I climbed fast, there were a lot of lessons to learn. So after I became vice president, I really had to own that leadership seat, or that executive leadership seat, and recognize that what had got me there. 
Here is was not what was going to keep me there. So the thing that I did after I became a vice president was really understanding how to be an effective executive. So that means really understanding the business side, which I already knew I had been doing that I've been thinking about that since college, so that wasn't something that I was concerned about, but the biggest thing was forming executive level relationships and really understanding how to form allies, and understanding that at that level, it's less of I have the right answer, and listen to me, because I'm a vice president and more of a okay. How am I influencing the people around me to listen to my idea, accept my idea, champion and support my idea. And it's not enough to just have something that's right on paper.   Speaker 1 ** 26:06 The others the other side of that, of course, could be that maybe you have an idea that may or may not be the right idea, which also means you need to learn to listen,   Ashley Rudolph ** 26:13 yes, exactly, exactly, and that was absolutely the other side of it. So me coming into things and being like, I understand what needs to happen, and not having all the context either way, right? So, yeah, yeah, yeah,   Speaker 1 ** 26:31 but you must have done pretty well at doing all that.   Ashley Rudolph ** 26:34 I figured it out eventually. Yes, I did figure it out eventually, and it wasn't easy, but I was able to grow a team and scale a team, and I was able to move from maybe the business side of running operations to the product and technology side of it, so being able to see two different sides of the coin. And yeah, it did. It did work. Well, I was able to create my own department, which was a product project management office that oversaw all of the work of the entire product and design and technology teams, 250 people. I I'm not sure that I would have thought I was capable of doing something like that, and building something from the ground up, and hiring a team of, I think, 15 people, and leading that department. And, yeah, yeah, and it was great. I did learn a lot. And then 2023 happened. And that was the major turning point in Tech where I think the dominant story shifted from, or at least in education technology, which I think you know something a lot about, but the dominant story shifted from this is great. This is growing. Distance Learning is fueling growth. There's so much opportunity here to it's too big. We need to, you know, do layoffs. We need to find a way to right size the business. There's actually not a lot of growth happening. So 2023 happened, and I ended up getting laid off with my entire department that I built. And that was such a huge lesson, a huge leadership lesson for me, for sure. So I'll pause so that I'm not not talking at you, but hanger, yeah, yeah,   Speaker 1 ** 28:46 well, so you got laid off. I've been there. I've had that happen. And, yeah, it isn't fun, but it's like anything else. You may not have been able to control it happening, but no, you are the one who has to deal with it. So you may not have control over it happening, but you always have control over how you deal with what happened.   Ashley Rudolph ** 29:09 Yes, yes,   29:11 yes. And what did you do?   Ashley Rudolph ** 29:14 And that's exactly what was so different about this time. So I will say I had two months notice. I had an amazing leader, such a technology officer. When the decision was made, he said, Okay, we can make this decision, but I have to tell Ashley immediately. 
So he told me, and it wasn't surprising, right? Because I saw how the business what direction the business was going in. So I can't say I was shocked, but the big question that I had was, Oh, my God, what am I going to do about my team? And I felt such immense responsibility because I had hired many of them I came to. Care about them and their careers and their livelihoods, and, yeah, I just felt responsible for it. So you said it, you said it beautifully, and that it was about what I decided to do. So from that moment, I shifted my focus, maybe, maybe to my own detriment, but whatever, I came out on the upside, but I shifted my focus to my team, and I thought the best thing that I could do in that moment was preparing them for their next chapters without going directly to the team and damaging the trust of the Chief Technology Officer and saying, in two months, we're all going to get laid off. That's also not reflective of the type of leader I wanted to be. So I figured out that, because we were a project management office and because there wasn't a lot of new work at the company, we had downtime. So I implemented a meeting on the calendar, which was a project review, and every single week, someone on my team had the opportunity to present their projects and talk about what they learned, what was challenging for them, and what their successes were, right, some combination of those things, and they all did it, and that was my way of helping to start prepare them for the interview process, because now you know your work, you know what your impact was, and you've gotten my feedback as someone who's a leader, who knows what hiring managers are looking for, you got my feedback on the best ways to present yourself, and they were able to ask questions. There were some people who approached me or the director on my team privately and asked us to review their resumes, because they kind of saw the writings on the wall without me ever having to say it, and I did. And what ended up happening is, at that two month mark, or whenever, when the layoffs did happen, no one on my team was shocked, and there were people who actually within a month after the layoff happened, they had found new jobs because they had that time to prepare and felt confident in their job search and the stories that they were telling about themselves. So I all that to say that I did exactly that. I chose the type of leader that I wanted to be, and the thing that felt important to me was preparing my team for their next chapter,   Michael Hingson ** 32:32 which I would say is the right thing to do,   Ashley Rudolph ** 32:34 yeah, yes, exactly, because it   Speaker 1 ** 32:37 isn't, no matter what a lot of people might think, it isn't about you, it's about the team. It's about you and the rest of the team, because you're all a team,   Ashley Rudolph ** 32:45 yeah? Except Yes, yes. And I very much viewed my team as an extension of myself, an extension of them. I you know, it wasn't just about them doing a job for me, quote, unquote, like that's not the type of leader that I am. We are a team,   Speaker 1 ** 33:04 right? So meanwhile, while you were doing that and helping the team, what were you also doing for you? And   Ashley Rudolph ** 33:12 that's why I said to my detriment, I didn't do a lot of thought. I put no thought into what I wanted to do. Okay? At all. I just And you know what? It's not to my detriment. 
I think what I needed at that time was a distraction, and this was a really good distraction for me, from sorting through what I wanted to do next, but also in navigating that with my team and supporting them through that, I think the answer became very clear once I was ready to ask my question, I just coached my team. So yeah, yeah, yeah, yeah.   Speaker 1 ** 33:51 And so you sort of, as you would say, pivoted to being a coach,   Ashley Rudolph ** 33:57 yes, yes. And I want to be clear that this wasn't a decision that was like, you know, that I just fell into coaching, you know, I I made the decision to so I took some time to think about what were the pieces of my work that I really loved when I was a VP at multi, you know, at multiple companies, and the answer was clear, and that I really loved coaching and helping people become better at their work, and I really loved mentorship. And those were the parts of the work that if I could just do that all day, that's what I would want to do. And I was like, Well, I have the I can make a decision to do that all day, every day now, because I'm not doing anything, I just got laid off. So I can choose to do this work. So that's exactly how I ended up being a coach.   Speaker 1 ** 34:58 Well, so you. Ever originally planned on being a coach. So was it that work with your team that really was the sort of pivotal decision for you, that although you never thought you were going to be a coach, that led you to coaching, or was there something else that really helped move you there? There was something else. Okay, yeah, more to the story.   Ashley Rudolph ** 35:21 There is always you're peeling all the layers so, so initially, what I thought I would do, because I was an operations person, I was like, I'll just be an operations consultant. I'll go out on my own, and people will hire me to be their ops person. So let me, you know, run with that as an idea. And I started having conversations with former colleagues. And what was funny in that so many of their conversations were kind of like, oh yeah, I want to support you. And that sounds nice. I understand why you would want to be an operations consultant. But there's something more interesting about you being a coach. Or I want to hire you to be a coach for my team. Or, Hey, you did really amazing things in your career. You should help other people do those things. And that was the theme that people kept telling me, so I finally decided, decided to listen. That's how I landed on coaching. And instead of it being like, oh my god, I'm trying to sell the value of myself as an operations consultant, once I just owned the coach title, people just started saying, okay, yep, Sign me up. Or I'll refer you to someone who needs a coach right now. Or, hey, you coach just one person on my team, and they're great. Here's more. So it just became easy, and it became less of a I'm trying to sell people, and I'm trying to, like, convince them that they need me in this role, it was just easy.   Speaker 1 ** 37:04 So do you think you talked about being ambitious when you were in college and starting that business at Babson and so on? Do you think you've always continued to try to be, if you will, ambitious, or did you sort of shift in terms of mindsets over time?   Ashley Rudolph ** 37:22 Yeah, that's a really good question. 
I do think I have always been ambitious, and when I visited my mom last year or the year before last for Thanksgiving, I found a fake report card that I wrote myself, that I wrote for myself in fourth grade. And there was a prompt that said, what would you want your teacher to write on your report card at the end of this year? And I wrote, Ashley is excelling at excellence. Well, there you go, fourth grade. So I think it's always been there.   Speaker 1 ** 38:02 So is it, but is it ambition? Is it ambition, or is it being industrious and being being confident? You know?   Ashley Rudolph ** 38:10 Yeah, yeah. Oh, that is such a good question, right? So there was a version of me when I was in the corporate world where I would have just said, yeah, it's ambition, right? Because I'm always motivated to, you know, go after the next level, and that's what's driving me. And now, now that you put that question out there, it is, it is that confidence, because I'm not chasing a thing or the next level right now, in this phase, I'm chasing quote, unquote impact like the thing that drives me is helping people, helping people probably achieve things for themselves that They also didn't think that they could in their careers, and I'm just helping them get there, yeah,   Speaker 1 ** 39:06 and that's why I asked the question, because ambition, the way you normally would think of it, yeah, can be construed as being negative, but clearly what you're doing is is different than that. Yeah, you know, at this at the same time for you, now that you're coaching and so on, and you shifted to doing something different, yeah, did you have to let something go to allow you to be open to deciding to be a coach? Yeah,   Ashley Rudolph ** 39:38 and the thing that I had to let go was exactly what you just pointed out. So you are very intuitive. The thing I had to let go was that the traditional construct of what success looks like. So it looks like, okay, I'm a VP, so I next need to be an SVP. And then after that I need to be at the sea level. And no, and I guess there could have always been questions about, was that what I really wanted, or was it just the next level that I was after? Yeah, yeah. And there was that, I think it was just the next level for quite some time, but now, like I said, the thing that I let go of was that and wanting to grasp for what the next level is. And now for me, it looks like, okay, well, I only have so many hours in the day, so I can't coach unlimited people, but I still want to impact many people. So what does that mean? Okay, well, I'm writing a newsletter, and I put out a newsletter every week with my thoughts, and that can reach many more people than I can one to one or podcast. I'm talking to you on this podcast, and maybe me sharing more of my story will inspire someone else, or I'll learn from you and your community, Michael, but yeah, I think the thing, the thing that determines what success looks like for me is my ability to impact   Speaker 1 ** 41:14 and and the result of that is what happens with the people that you're working with, and so you, you do get feedback because of that,   Ashley Rudolph ** 41:25 yes, yes, I do get, I get lots of feedback, and it is, it's transformational feedback. And I think one of the things that I love, and I do this for every client that I work with, is on day one, we established a baseline, which I don't necessarily have to always say that to them like we're establishing the baseline, it's understood. 
And then in our last session, I put a presentation together, and I talked to them about where they were when we started, and what they wanted for themselves, and over the course of us coaching together, what they were able to accomplish, so what their wins were, and then where they land, and just me taking them on that journey every single or when they work with me, is eye opening, because they don't even see the change as it's happening. And I'm like, Hey, you did this. You're not that person that you walked into this room as on day one, and maybe by the end, you have a new job, or you got promoted, or you feel more confident and assured in your role. But whatever it is, you've changed, and you should be proud of yourself for that.   Speaker 1 ** 42:43 Yeah, yeah. And it's, I am sure, pretty cool when you get to point that out to people and they realize it, they realize how far they've come.   Ashley Rudolph ** 42:55 Yeah, yeah, it is. It's, it's really awesome to be able to share that with people and to also be on the journey with them, and when they think that maybe they're not ready to do something just gently reminding them that they are. And sometimes I think about what, you know, what managers have done for me, because I've, I had the privilege of working with really great managers some in my career, and yeah, they did that to me, and that that's how I was able to accomplish the things that I did. So yeah,   Speaker 1 ** 43:34 well, it's great that you're able to carry those lessons forward and help other people. That's pretty cool.   Ashley Rudolph ** 43:38 Yeah, yeah. And honestly, I hope that my clients can do the same. So if there are things that they learn in coaching, any frameworks or things like that, if they're able to help people, then that's great. And the cycle continues, you know? So, yeah, yeah.   Speaker 1 ** 43:57 You know, a question that comes to mind is that when we talk about leadership, there are certainly times that leaders face uncertainty, especially when there are transitions going on and you've experienced a lot of transitions. What would you say is the unconventional truth about leadership in times of change and transition?   Ashley Rudolph ** 44:20 Yeah, yeah. So I think the thing that I see the most is that in times of transition, especially if it's a transition that maybe you have no control over, right? You're not choosing to leave your job, for example, the the inclination is to over control, right, and try to assert control over the situation in any way that you can, and in more cases than not, that backfires to some degree. So the thing that I try to focus on with my clients is getting to a point where you accept the fact that what is happening is happening. I'm kind of like my layoff, right? I didn't fight the decision or try to change the decision. I just had to accept it for what it was. And then the thing that we focus on is now that we know the thing is happening, whatever the transition or change is, it doesn't have to be as extreme as a layoff, but now that we know that it's happening, what can you control and what can you focus on? And that's what we need to spend our time on. And it can be anything, you know, sometimes people are put on performance improvement plan, and you kind of just if, if this is a situation where you're like, Oh yeah, I could see where this came from, and I wish that I was not in this situation. 
Okay, well, you kind of have to accept that you are, and what can you do about it now, it's really, yeah,   Speaker 1 ** 45:58 what's the hardest lesson you've learned about leadership and being a leader, not just being an executive, but coaching people.   Ashley Rudolph ** 46:10 Yeah, and I get this all the time as a coach too. It's it's in me, but the lesson that I've learned is I don't have to know everything. That's   Michael Hingson ** 46:21 a hard lesson. To learn, isn't   Ashley Rudolph ** 46:25 it? It is, especially when you feel like as a leader, like people are relying on you, or you think they are, they're relying on you to know the answers or to know what to do next, or as a coach, they're relying on you to ask the right questions or to guide them in the right direction, right? And sometimes you just don't know, and that's okay, and it's also okay to say that. And I was just going to say that, yeah, yeah, exactly, exactly. It took me a long time to get comfortable with that, but now, now I am more comfortable with it, for sure. Do you feel like you struggled with that too? Or Yeah?   Speaker 1 ** 47:06 Well, I have, but I was blessed early on, when I was a student teacher in getting my secondary teaching credential, I was a student teacher in an algebra one class in high school, and one of the students came in one day, and he asked a question in the course of the day, and it should have been a question I knew the answer to, but I didn't. But when I when I realized I didn't, I also, and I guess this is my makeup, thought to myself, but I can't blow smoke about it, so I just said, you know, I don't know the answer, but I'm going to look it up and I will bring you the answer tomorrow. Is that okay? And he said, Yeah. And my master teacher after class cornered me, and he said, That was absolutely the best thing you could do, because if you try to psych out these kids and fake them out, they're going to see through you, and you're never going to get their trust. Yeah, and of course, he was absolutely right. So I did the right thing, but I also learned the value of doing the right thing. And Mr. Redman, my master teacher, certainly put it in perspective. And I think that's so important. We don't have to necessarily have all the right answers. And even if we do have the right answer, the question is, Is it our job to just say the right answer or try to guide people to get to the right answer?   Ashley Rudolph ** 48:41 Yeah, yeah, exactly. That's another leadership lesson, right? It's and it's so much more powerful when people do get to the answers themselves, yeah. And I think that kind of helps with them being less dependent on coming to you for the answers moving forward, right? If they're able to go on that path of discovery   Speaker 1 ** 49:04 well, and if they are able to do that and you encouraged it, they're going to sense it, and when they get the right answer, they're going to be as high as a kite, and they're going to come and tell you that they did it. So, yeah,   Ashley Rudolph ** 49:15 exactly. Yeah, yeah. What a good feeling.   Speaker 1 ** 49:19 Yeah, it is, what do you do? Or what are your thoughts about somebody who just comes to you and says, I'm stuck?   Ashley Rudolph ** 49:27 Ooh, that happens all the time. Michael, it happens all the time. And I'll tell you, there's two things. 
So if someone says I'm stuck, they either don't have the confidence to pursue the thing that they know they want to do, but they're just saying they're stuck, which is it is being stuck, right? If you can't take action, then you're stuck. But sometimes they frame that as I don't know where what I want to do or where I want to go, and then I ask. Couple of questions, and it's like, oh, well, you actually do know what you want to do and where you want to go. You just don't have the confidence yet to pursue that path. So part of the time, it's a confidence issue, or the other time, the thing that they're grappling with, or the other cases, what they're grappling with is, I haven't connected with like my values or the things that motivate me or my strengths even right? So maybe they're the ambitious person who was compelled to just chase the next level and the next level and the next level, but now they're asking, Is this really important to me, or do I really want this? As I spoke to another coach, and she ended up leaving what she thought was a dream job at Google, because every day she was kind of like, I still want to be here, and it wasn't her dream job, and she left to become a coach. So it's either one of those two things, most times, for the clients that I work with, and I ask a lot of questions, so I get to the answers, or I help them get to the answers by asking them the right questions. Yeah,   Speaker 1 ** 51:14 and that's the issue. And sometimes you may not know the right question right off the bat, but by the same token, you can search for it by asking other questions.   Ashley Rudolph ** 51:23 Exactly, exactly, exactly, yeah, yeah, that's it.   Speaker 1 ** 51:27 So what is, what is a transformation of a client that you experienced and kind of what really shifted, that changed everything to them, something that just really gave you chills, and was an AHA kind of thing. Yeah,   Ashley Rudolph ** 51:44 there are. There's so many one, okay, so one that I want to share is and basically the client went from, this isn't the job for me. I don't like the role I'm in. I don't think I can be successful, and I don't think my work is valued here. And I would say, over the course of eight months, she went from that to getting one of few perfect performance reviews in the company like it's a company that doesn't give a perfect performance review, right? So, right, going from that and being like, I need to find a new job. I've got to get out to I am excelling at this job, and it wasn't just anyone that gave her the perfect performance review. It was one of the co founders of the company. So like, top person is saying, Yeah, this is great. You're doing amazing work. There is value, and I think you're incredible. So in that transformation, the thing that she had to connect to, or reconnect to, was her values and understanding what are the things that she enjoys about her work and what are the things that she really didn't enjoy, and understanding the why behind that, and then the other two things for her, or developing her confidence, which sounds very fluffy, because it's like, How do you help someone do that? And I help people do that by helping them feel really good about their work product. So with her, with her, what we ended up doing was focusing on helping her prepare for some presentations. 
Me giving her feedback on her decks, or her talking to me about how she wanted to prepare for a meeting and the points that she wanted to make, and me helping her, you know, craft really compelling talking points, and having that feedback loop with me of being like, Okay, here's how the meeting went, and this was the feedback I got, and also being like, Oh, wow, the meeting went really well. And like feeling her confidence build over time by helping her get better at her work, and gradually over time, it just built to that amazing end point for her. But that's that's a transformation for me that will always stick out, because I just remember that first meeting and me just being like, okay, you know this, this might end up being a journey where we help her find a role that is better suited for her. And, you know, just kind of thinking about that, and it just didn't end up being that at all.   Speaker 1 ** 54:35 Well, the other thing that, in one way or another, probably plays into some of that is the people her bosses, the people who she worked for, probably sensed that something was going on, yeah, and she had to be honest enough to to deal with that. But as she progressed, they had to sense the improvement, and that. Had to help a lot.   Ashley Rudolph ** 55:01 Yes, for sure. And I think maybe there is confusion from her boss and in him thinking that she was ready to take on the work that he knew that she could take on, but she didn't quite feel ready yet. Yeah, so there was something she had to sort through, and she finally, not finally, that wasn't a lot of time at all, but she got there, and yeah, yeah.   Speaker 1 ** 55:26 And I'll bet they were better. I'll bet they were better communicators with each other by the time it was all said and done, too   Ashley Rudolph ** 55:31 Exactly, yes, yeah, yeah. They developed a shorthand, you know? And, yeah, yep.   Speaker 1 ** 55:39 So there are a lot of leaders who look great on paper, but when it really comes down to it, they just aren't really doing all that they ought to be doing. They feel restless or whatever. What's the real reason that they need to deal with to find momentum and move forward?   Ashley Rudolph ** 55:58 Yeah, so I'm going to take a I'm going to take a different approach to answering this question. And because of the people that I work with, again, they're high achievers. Yeah, right. And sometimes I see that what happens is maybe people have described them as restless, or people have said, Why aren't you happy? You have this amazing career, you should be happy. And I think, like that projection, they end up taking that on and feeling guilty about the fact that they want more. But at the core of it, when I talk to them or get to the level of, you know, Hey, what is happening here? What's causing this sense of restlessness? Surprisingly, the answer is, yeah, I have this great job or this great title, but I feel like I could be doing so much more. So it's an impact. It's an impact thing that is driving the people that I work with. So what we end up doing is trying to figure out, to some degree, like I have no control over what happens at work, so I don't want to pretend that I do, but if it is an impact question, then what we get to the core of is, okay, well, how do you increase your impact? And that's what I work with them on?   Speaker 1 ** 57:24 Well, here's a question. So I have been in sales for a long time, and of course, as far as I'm concerned, I still am being a public speaker. 
I sell more life and philosophy than anything else. But one thing a lot of people face is rejection. A lot that was redundant, but a lot of people face rejection. How do you get people to understand that rejection isn't a bad thing, and that it actually is a sign of success more often than not? And I agree with it. And you had given me this question, I think it's a great question and relevant to answer.   Ashley Rudolph ** 57:58 Yeah, so I just try to flip the thinking. So I make it less about the person rejecting you, or you receiving a rejection. And to me, if you get rejected, it's a signal that you try, and that's what we focus on, right? So if you're not getting rejected and you're in the same place that you were, it's probably an indication that you're not trying, or you're not taking big enough swings, or you're not pushing yourself. So, yeah, I just try to help my clients. You know, think about the fact that, hey, you got rejected because you tried and you put yourself out there, and that's great. And then the other thing I like to think about with rejection is really just like rejection is someone placing a bet, and if you know about bets, you know that they're not 100% right, and sometimes the person just decided they weren't going to place their bet on you. And it's not that you're not capable, or it's not that it wasn't a great idea, maybe it wasn't the right time, maybe whatever, you don't know what the why is, but it's just a bet, and someone could take a different bet, and it can be on you, or you can bet on yourself even, right? So once you start to think about rejection as just the choice that someone made on a day, and that person isn't all people, and they're certainly not representative of, you know, the person who could decide to take a chance on you and your idea or your initiative, then I think the rejection stings a lot less.   Speaker 1 ** 59:31 Yeah, one of the expressions I've heard regularly is the selling really begins. And I and I think whether it's selling a product or whatever you're doing, but the selling really begins when the objections begin or the rejection. Yeah, and I think there's, there's so much truth to that one of the things, one of the things that I used to do when I was selling products, is I would play a game with myself. Is this person. Going to give me a new objection or a new reason for rejection that I haven't heard before, and I always loved it when somebody came up with something that truly I hadn't heard before, and that was absolutely relevant to bring up, because then it's my job to go off and deal with that, but it was fun to put my own mindset in that sort of framework, because it's all about it's it's not me, unless I really am screwing up, it's other things. And no matter whether it's me screwing up or not, it's my job to figure out how to deal with whatever the other person has on their mind. Yeah, and when the new things come up, those are so much fun to deal with. And I even praised people, you know, I've never heard that one before. That's really good. Let's talk about it.   Ashley Rudolph ** 1:00:50 So great, yeah, yeah. They were probably like, oh, okay, wow. Well, yeah, let's talk about it, yeah.   Speaker 1 ** 1:01:00 But I didn't show fear, and didn't need to, because I I went into a learning mode. I want to learn what's on their mind and what's going on,   Ashley Rudolph ** 1:01:09 yeah, and that's what it's about. It's about understanding what's important to the other person, or understanding their concerns. 
And I think if you come at it like you did, from a place of really wanting to understand them and find common ground, then sometimes you can even shift the rejection right often.   Speaker 1 ** 1:01:27 If you do it right often you can. Yeah, you can. You can reverse it, because most rejections and objections are really based on perception and not necessarily reality   Ashley Rudolph ** 1:01:41 at all? Yes, exactly yes, yes, which is   Speaker 1 ** 1:01:45 important? Well, if you could go back and talk to a younger version of yourself, what moment would you choose and who? What would you say that they should learn? Oh,   Ashley Rudolph ** 1:01:54 this is so this is such a   Speaker 1 ** 1:01:57 great fun question. Yeah,   Ashley Rudolph ** 1:02:03 if I could go back, I would probably tell myself that you you don't necessarily have to run away to find the things that you're looking for in your career, right? And I think in life too. Sometimes you think, Oh, I just have to move to a different city, or I just have to buy a new outfit, or I just have to, I have to, I have to, I have to change this thing. And sometimes you just don't have to. Sometimes you can have a conversation about thing that you want or the thing that you're not getting. So if this is a boss right, talking about the thing that you want or that you're not getting, and coming up with a solution together, and I think for quite some time, I was too afraid to do that, and if I wasn't getting what I needed or what I wanted, I just thought the best thing to do was to find it elsewhere, and I would just go back and tell myself to ask for what I wanted first, and then get the information and then leave if I had to. But leaving doesn't have to be the default.   Speaker 1 ** 1:03:21 Yeah. Cool. Well, Ashley, this has been a lot of fun. We've been doing this an hour. Can you believe   Ashley Rudolph ** 1:03:29 it? We have, we have the time flew by. Fun. Yeah, I could have kept going.   Michael Hingson ** 1:03:36 Well, then we'll just have to do another one. Yeah,   Ashley Rudolph ** 1:03:39 we do. It, I will always come back. You are amazing. Michael,   Speaker 1 ** 1:03:43 well, this has been fun, and maybe one of the things that you could do to help spread the word about what you do and so on is do your own podcast.   Ashley Rudolph ** 1:03:50 Yes, something else to think about, yeah, yeah, that's a great idea. And then if I do then I will invite you on there. I'd   Speaker 1 ** 1:04:00 love it, I'll come absolutely well. I want to thank you again, and I want to thank all of you for listening and watching today. This has been very enjoyable and a lot of fun, and I appreciate you taking the time to be with us. I'd love to hear your thoughts. Please feel free to email me at Michael H i@accessibe.com so accessibi is spelled A, C, C, E, S, S i, B, E, so Michael M, I C H, A, E, L, H i@accessibe.com or go to our podcast page, www, dot Michael hingson.com/podcast and Michael hingson is m, I C H, A, E, L, H, I N, G, s o n.com/podcast, love to hear from you, and certainly I hope that whenever you're listening or watching, give us a five star rating. We value your reviews, and we really want to know that we're doing good by you, so please give us good reviews, and if you have thoughts or things that you want us to know about, don't hesitate to reach out. It. And for all of you, and Ashley, including you, if you know of other people who ought to be guests on our podcast, it's so much fun to meet more people from those who have been on before. 
But for anyone, if you know someone who ought to be a guest, please let me know. Reach out, and we will honor your interest and we will bring them on, because I think everyone has, as I told Ashley earlier, stories to tell. So hope that you will do that and that we'll get to see you on our next episode. And again, Ashley, I just want to thank you for being here. This has been so much fun. All   Ashley Rudolph ** 1:05:37 right, thank you, Michael.   **Michael Hingson ** 1:05:42 You have been listening to the Unstoppable Mindset podcast. Thanks for dropping by. I hope that you'll join us again next week, and in future weeks for upcoming episodes. To subscribe to our podcast and to learn about upcoming episodes, please visit www dot Michael hingson.com slash podcast. Michael Hingson is spelled m i c h a e l h i n g s o n. While you're on the site., please use the form there to recommend people who we ought to interview in upcoming editions of the show. And also, we ask you and urge you to invite your friends to join us in the future. If you know of any one or any organization needing a speaker for an event, please email me at speaker at Michael hingson.com. I appreciate it very much. To learn more about the concept of blinded by fear, please visit www dot Michael hingson.com forward slash blinded by fear and while you're there, feel free to pick up a copy of my free eBook entitled blinded by fear. The unstoppable mindset podcast is provided by access cast an initiative of accessiBe and is sponsored by accessiBe. Please visit www.accessibe.com . AccessiBe is spelled a c c e s s i b e. There you can learn all about how you can make your website inclusive for all persons with disabilities and how you can help make the internet fully inclusive by 2025. Thanks again for Listening. Please come back and visit us again next week.

Code and the Coding Coders who Code it
Episode 52 - Vladimir Dementyev

Code and the Coding Coders who Code it

Play Episode Listen Later Jun 17, 2025 65:55 Transcription Available


What happens when you put Rails in a browser? Vladimir Dementyev (Vova) is pushing WebAssembly to its limits by creating an interactive Rails playground that runs entirely client-side. This groundbreaking project aims to eliminate the frustrating installation barriers that often discourage newcomers from trying Ruby on Rails. "I asked myself the question - can I run Rails on WASM? And that's when you feel yourself like a pilgrim software engineer, experiencing something for the first time that no one ever experienced," Vova shares. The project isn't just a technical curiosity but serves a vital educational purpose - allowing anyone to learn Rails through the official tutorial without wrestling with Ruby version managers or environment setup. As principal engineer at Evil Martians, Vova balances multiple innovative projects simultaneously. Beyond Rails on WASM, he's organizing the first San Francisco Ruby Conference (coming November 2025), building a custom open-source CFP application, expanding AnyCable to support Laravel, and updating his technical book "Layered Design for Ruby on Rails Applications." His creative problem-solving approach extends to production environments too, where techniques developed for experimental projects help solve real client challenges like making libvips fork-safe for high-performance web servers. Vova's philosophy on productivity is refreshingly practical: work when inspiration strikes rather than forcing creativity during arbitrary hours. "If I have no desire to sit at my desk and stare at the laptop, I'm not going to do that. I wait for the moment to come, and then I sit and work, and it's really efficient." Ready to see what Ruby and Rails can do in previously impossible environments? Follow Vova's work, attend his RailsConf talk, or join the growing San Francisco Ruby community to witness how Ruby's flexibility continues to break new ground in unexpected ways.
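For readers curious about the building block underneath this, here is a minimal, hedged sketch of running plain Ruby in the browser with the ruby.wasm packages. The package name, CDN path, and version are assumptions based on the project's documented usage, and Vova's playground layers much more on top to boot a full Rails app:

```typescript
// Minimal sketch: boot a Ruby VM compiled to WebAssembly inside the browser.
// The CDN URL below is illustrative; check the ruby.wasm docs for the current
// version before relying on it.
import { DefaultRubyVM } from "@ruby/wasm-wasi/dist/browser";

async function main() {
  // Fetch and compile the prebuilt Ruby-with-stdlib Wasm binary.
  const response = await fetch(
    "https://cdn.jsdelivr.net/npm/@ruby/3.4-wasm-wasi/dist/ruby+stdlib.wasm"
  );
  const module = await WebAssembly.compileStreaming(response);

  // Instantiate the VM and evaluate Ruby entirely client-side.
  const { vm } = await DefaultRubyVM(module);
  vm.eval(`puts "Hello from Ruby #{RUBY_VERSION} running in your browser"`);
}

main();
```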

Remote Ruby
Bites and Bytes – Cheesesteaks and One Month Rails

Remote Ruby

Play Episode Listen Later May 30, 2025 38:10


In this episode of Remote Ruby, Chris and Andrew catch up on recent travels and food experiences, including the best Philly cheesesteaks they've ever had. The conversation shifts towards development topics, particularly testing challenges and solutions in Ruby on Rails, featuring discussions about emoji pickers, asset pipelines, and the prawn library. Chris shares updates on acquiring an old Rails app, One Month, and future plans for this project. They also explore various development hiccups and solutions, including using libraries for faster system tests and streamlining asset pipelines. The episode wraps up with insights into new tools like an official Postgres extension for VS Code and plans for future video content on their platform.
Links
- Judoscale - Remote Ruby listener gift
- One Month
- Running Rails System Tests With Playwright Instead of Selenium by Justin Searls
- Announcing a new IDE for PostgreSQL in VS Code from Microsoft
- Lou Malnati's Pizzeria
- Chris Oliver X/Twitter
- Andrew Mason X/Twitter
- Jason Charnes X/Twitter

Elevate with Robert Glazer
Elevate Classics: David Heinemeier Hansson on Leadership on Social Issues, Therapy, and Entrepreneurship

Elevate with Robert Glazer

Play Episode Listen Later May 27, 2025 84:57


David Heinemeier Hansson, better known as DHH, returns to the Elevate Podcast in this classic episode. DHH is the co-founder of Basecamp, which has been used by over 20 million people globally. He's also the founder of Hey and the creator of the transformational Ruby on Rails, an open-source web framework that was used to create Basecamp, GitHub, Shopify, Airbnb and more. He is also a frequent writer at Hey World, the New York Times bestselling author of four books, including Rework, and an award-winning racecar driver. DHH and Robert had an extensive conversation about therapy, parenting, going against the grain, the future of the tech industry and much more.

Remote Ruby
The Frustrations of React and the Power of Turbo

Remote Ruby

Play Episode Listen Later May 23, 2025 30:32


In this episode of Remote Ruby, Andrew and Chris discuss the frustrations of working with React and the advantages of using Hotwire. They also talk about upcoming plans, including Andrew's retreat to Philadelphia and Lancaster, and the new features they've been working on, like an inbox for notifications. The conversation touches on the complexity of maintaining large Ruby on Rails applications and the new features in the latest Ruby release. Chris shares his experience at a Post Malone concert, and some tips on maintaining productivity by rearranging workspaces. Hit download now to hear more!
Links
- Judoscale - Remote Ruby listener gift
- Rails World 2025, September 4 & 5, Amsterdam, NL
- 'Learn Hotwire' Course
- bunny.net
- Namespaces 101
- Ruby Releases - GitHub
- Chris Oliver X/Twitter
- Andrew Mason X/Twitter
- Jason Charnes X/Twitter

The Bike Shed
464: Modelling the stars with Rémy Hannequin

The Bike Shed

Play Episode Listen Later May 20, 2025 42:59


Joël and Rémy draw inspiration from the stars as they discuss Rémy's new open source Ruby gem, Astronoby (https://github.com/rhannequin/astronoby). Rémy reveals the challenges he faced in taking on this project, the scientific translation work that went into making it accessible for everyone, as well as the key lessons he learnt from modelling the cosmos. The Sponsor for this episode has been Judoscale - Autoscale the Right Way (https://judoscale.com/bikeshed). Check out the link for your free gift! If you're enthusiastic about space and want to try out Rémy's new gem, you can find it here (https://github.com/rhannequin/astronoby). Alternatively you can read more about astronomical computing here (https://dev.to/rhannequin/series/17782). Your host for this episode has been thoughtbot's own Joël Quenneville (https://www.linkedin.com/in/joel-quenneville-96b18b58/), and he was accompanied by Rémy, who can be found on LinkedIn (https://www.linkedin.com/in/rhannequin/?locale=en_US) or through social media (https://mastodon.social/@rhannequin@ruby.social) under the handle @rhannequin. This has been a thoughtbot (https://thoughtbot.com/) podcast.

Code and the Coding Coders who Code it
Episode 50 - Adam Fortuna

Code and the Coding Coders who Code it

Play Episode Listen Later May 20, 2025 35:53 Transcription Available


Swimming against the current sometimes leads to unexpected treasures. In this fascinating conversation, Adam Fortuna reveals how migrating Hardcover—a social network for readers with 30,000 users—from Next.js back to Ruby on Rails delivered surprising performance improvements and development simplicity.
The journey begins with Adam explaining how Hardcover originated as a response to Goodreads shutting down their API. As a longtime Rails developer who initially chose Next.js for its server-side rendering capabilities, Adam found himself drawn back to Rails once modern tools made it viable to combine Rails' backend strengths with React's frontend interactivity. The migration wasn't a complete rewrite—they preserved their React components while replacing GraphQL with ActiveRecord—and unexpectedly saw significant improvements in page load speeds and SEO rankings.
At the heart of this technical evolution is Inertia.js, which Adam describes as "the missing piece for Rails for a long time." This elegant solution allows direct connections between Rails controllers and React components without duplicating routes, creating a seamless developer experience. We dive into the challenges they faced, particularly with generating Open Graph images and handling API abuse, and how they solved these problems with pragmatic hybrid approaches.
The conversation takes an exciting turn as Adam discusses their work on book recommendation engines, combining collaborative filtering with content analysis to help readers discover their next favorite book. As someone currently enjoying the Dungeon Crawler Carl series (described as "RPG mixed with Hitchhiker's Guide"), Adam's passion for both books and elegant technical solutions shines throughout.
Listen in as we explore how going against conventional wisdom sometimes leads to better outcomes, and discover why Hardcover is now being open-sourced to invite community collaboration. Whether you're interested in Rails, JavaScript frameworks, or book recommendations, this episode offers valuable insights into making technical decisions based on real-world results rather than following trends.
Links
- https://hardcover.app/blog/part-1-how-we-fell-out-of-love-with-next-js-and-back-in-love-with-ruby-on-rails-inertia-js
- https://adamfortuna.com/
- https://bsky.app/profile/adamfortuna.com
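As a rough illustration of the Inertia.js pattern the episode describes (the names here are hypothetical, not taken from Hardcover's codebase): a Rails controller action renders a page component and passes props directly, for example `render inertia: "Books/Show", props: { book: book.as_json }` with the inertia_rails gem, and the matching React component receives those props with no GraphQL layer or duplicated routes in between.

```tsx
// Hypothetical Inertia page component (TypeScript + React).
// The props arrive straight from the Rails controller action.
import { Link } from "@inertiajs/react";

interface Book {
  id: number;
  title: string;
  averageRating: number;
}

export default function BooksShow({ book }: { book: Book }) {
  return (
    <article>
      <h1>{book.title}</h1>
      <p>Average rating: {book.averageRating.toFixed(1)}</p>
      <Link href="/books">Back to all books</Link>
    </article>
  );
}
```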

Prompt
A wake-up call from a homecoming tech rebel, the search kings in crisis, and Trump's pope show

Prompt

Play Episode Listen Later May 15, 2025 55:41


Today's episode is a special, as Henrik is ill. In return, we are joined by one of Denmark's most prominent tech profiles: David Heinemeier Hansson, who has returned home after 20 years in the US. He is one of the few Danes to have made a global mark on the tech world with Ruby on Rails, which forms the backbone of Shopify and GitHub, among others. Now he is back to deliver a wake-up call to Denmark and Europe, which according to him have next to nothing to offer when it comes to tech - and which have made themselves embarrassingly dependent on the US. We also discuss whether Google's time as everyone's gateway to the internet is running out. For the first time in 22 years, Google usage in the Safari browser fell last month. Instead, more people are turning to AI chatbots. But what does that mean for the internet as we know it? Finally, we turn to Trump's unstoppable love of deepfakes and AI videos. The world is outraged time and again when he shares images of himself in the Pope's robes - or of Gaza transformed into a Trump riviera with sandcastles and palm trees. But why be outraged when no one can doubt that it is satire? And why do people fall into the outrage trap every time? Hosts: Marcel Mirzaei-Fard, tech analyst, and guest David Heinemeier Hansson, tech entrepreneur and founder of 37signals.

Hipsters Ponto Tech
Careers: Alexandre Gregianin, CTO of Smart Fit – Hipsters Ponto Tech #462

Hipsters Ponto Tech

Play Episode Listen Later May 6, 2025 44:10


The first episode of the month is career day! Today we talk with Alexandre Gregianin about the start of his journey as a support analyst, and about the decisions, surprises, and detours along the way that led him to the role of CTO at Smart Fit, the third-largest gym chain in the world. Here's who joined the conversation:
- André David, the host who is inspired by people's journeys from one place to another
- Marcus Mendes, co-host and one of the voices of the Hipsters Network
- Alexandre Gregianin, CTO at Smart Fit

Compilado do Código Fonte TV
AIs are not reducing jobs and salaries; 30% of MS code comes from AI; Phi-4 new generation; LlamaCon; 358% increase in DDoS attacks [Compilado #197]

Compilado do Código Fonte TV

Play Episode Listen Later May 4, 2025 68:44


PodRocket - A web development podcast from LogRocket

Carson Gross, creator of HTMX, talks about its evolution from intercooler.js, its viral rise on social media, and its philosophy of simplicity and stability. They dive into how HTMX fits into the modern web dev ecosystem, the idea of building 100-year web services, and why older technologies like jQuery and server-side rendering still have staying power. Carson also shares insights on open-source marketing, progressive enhancement, and the future of web development.
Links
- https://bigsky.software
- https://www.linkedin.com/in/1cg
- https://github.com/bigskysoftware
- https://x.com/htmx_org
- https://htmx.org
- https://htmx.org/discord
- https://hypermedia.systems
- https://github.com/surrealdb/surrealdb.js
- https://unpoly.com
- https://ui.shadcn.com
Special Guest: Carson Gross.

The New Stack Podcast
How Heroku Is ‘Re-Platforming' Its Platform

The New Stack Podcast

Play Episode Listen Later Apr 24, 2025 18:01


Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed "Fir," will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku's commitment to open source. The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and Kubernetes, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku's future with the cloud native ecosystem. Learn more from The New Stack about Heroku's approach to Platform-as-a-Service:
- Return to PaaS: Building the Platform of Our Dreams
- Heroku Moved Twelve-Factor Apps to Open Source. What's Next?
- How Heroku Is Positioned To Help Ops Engineers in the GenAI Era

Maintainable
Freedom Dumlao: What 70 Java Services Taught Me About Focus

Maintainable

Play Episode Listen Later Apr 22, 2025 63:19


Freedom Dumlao (CTO at Vestmark) joins Robby to explore what it means to maintain software at scale—and why teams sometimes need to unlearn the hype. With two decades of experience supporting financial systems, Freedom shares how his team manages a Java monolith that oversees $1.6 trillion in assets. But what's most surprising? His story of how a team working on 70+ microservices rebuilt their platform as a single Ruby on Rails monolith—and started shipping faster than ever before.
Episode Highlights
- [00:02:00] Why Respecting Legacy Code Matters: Freedom reflects on a lesson he learned at Amazon: "Respect what came before." He discusses the value of honoring the decisions of past developers—especially when their context is unknown.
- [00:05:00] How Tests Help (and Where They Don't): Freedom discusses how tests can clarify system behavior but not always intent—especially when market logic or business-specific rules come into play.
- [00:07:00] The Value of Understudies in Engineering: Freedom shares how his team intentionally pairs subject matter experts with understudies to reduce risk and transfer knowledge.
- [00:09:30] Rethinking Technical Debt: He challenges the fear-based framing of technical debt, comparing it instead to a strategic mortgage.
- [00:17:00] From 70 Services to 1 Monolith: At FlexCar, Freedom led an unconventional rewrite—consolidating 70 Java microservices into a single Rails app. The result? A dramatic increase in velocity and ownership.
- [00:25:00] Choosing Rails Over Phoenix, Laravel, and Django: After evaluating multiple frameworks, Rails' cohesiveness, Hotwire, and quick developer ramp-up made it the clear winner—even converting skeptical team members.
- [00:31:00] How Rails Changed Team Dynamics: By reducing dependency handoffs, the new Rails app enabled solo engineers to own complete features. The impact? Faster delivery and more engaged developers.
- [00:36:30] Why Rails Still Makes Sense at a 20-Year-Old Company: Even with a large Java codebase, Vestmark uses Rails for rapid prototyping and new product development.
- [00:41:00] Using AI to Navigate Legacy Systems: Freedom explains how his team uses retrieval-augmented generation (RAG) to surface relevant code—but also the limitations of AI on older or less common codebases.
- [00:51:00] Seek Feedback, Not Consensus: Freedom explains why aiming for alignment slows teams down—and how decision-makers can be inclusive without waiting for full agreement.
Links and Resources
- Freedom Dumlao on LinkedIn
- Vestmark
- No Rules Rules
- Dungeon Crawler Carl series

Tech Disruptors
Gusto Scales Up HR-Payroll Software for SMBs

Tech Disruptors

Play Episode Listen Later Apr 22, 2025 36:06


Helping small businesses manage all things human resources — payroll, benefits, tax compliance and employee onboarding — is Gusto's central focus, which presents unique challenges given the fragmented customer base. Chief Technology Officer Mike Tria outlines the company's product-portfolio suite and the evolution of the human-capital management industry over the past 10 years from a technology angle. In this Tech Disruptors podcast episode, BI analyst Niraj Patel sits down with Tria to discuss Gusto's appeal across small and medium-sized business (SMBs), the leverage of its technology infrastructure (Ruby on Rails) to scale up for asynchronous workloads, the software ecosystem, its “Gus” AI solution and more.

All JavaScript Podcasts by Devchat.tv
Breaking Into Tech: Lessons from My Career Path - JsJ 672

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Apr 7, 2025 44:11


This episode is a little different—thanks to a U.S. holiday, I'm flying solo. But that just means we get to have a one-on-one chat! I dive into my career journey—not to brag, but to offer insights for anyone feeling stuck: how my inventor grandfather sparked my early interest in tech, how I transitioned from electrical engineering to computer engineering, and how I went from IT support to discovering my love for programming while solving real-world problems at Mosey with Ruby on Rails.

Hipsters Ponto Tech
Careers: From Deyvid Nascimento to Mano Deyvin – Hipsters Ponto Tech #457

Hipsters Ponto Tech

Play Episode Listen Later Apr 1, 2025 52:01


The first episode of the month is career day! Today we talk again with Deyvid Nascimento, aka Mano Deyvin, this time to explore his own trajectory. From the days helping with his father's work to the creation of the channel that today accounts for nearly all of his profession, Deyvid tells how he became interested in development, how he fell in love with Ruby on Rails, and where the idea came from to create his famous (almost) unfiltered channel that tells it like it is about dev life. Here's who joined the conversation:
- André David, the host who has fun with Mano Deyvin's "Chorume Corporativo"
- Marcus Mendes, co-host and one of the voices of the Hipsters Network
- Deyvid Nascimento, developer and the content creator behind Mano Deyvin

Liquid Weekly Podcast: Shopify Developers Talking Shopify Development
Episode 038 - Jan Frey on How to Become a Shopify Developer in 2025

Liquid Weekly Podcast: Shopify Developers Talking Shopify Development

Play Episode Listen Later Mar 27, 2025 62:22


In this episode of the Liquid Weekly Podcast, hosts Karl Meisterheim and Taylor Page welcome back Jan Frey, one of the greatest Shopify development teachers on the web with the incredibly helpful and popular YouTube channel Coding with Jan. The conversation covers a range of topics including the current landscape of Shopify development, strategies for finding clients, the importance of professionalism, and the evolving role of AI in web development. Jan shares insights on his new JavaScript training program aimed at helping developers enhance their skills, while also discussing the significance of soft skills in client interactions.
Takeaways
- Finding clients often comes from referrals and established relationships.
- Professionalism is key in differentiating yourself as a developer.
- AI tools can enhance productivity but should not replace human developers.
- Soft skills are just as important as technical skills in client interactions.
- Building a solid portfolio and online presence is crucial for new developers.
- JavaScript is essential for Shopify development and should be learned thoroughly.
- Understanding the Shopify admin and Liquid is vital for effective development.
- Networking and community engagement can lead to more opportunities.
- Creating content can help establish authority and attract clients.
- Continuous learning and adapting to new tools is necessary for success in development.
- Introducing the .dev Assistant VSCode Extension - https://shopify.dev/changelog/introdu...
- [action required] Checkout APIs will be shut down April 1, 2025 - https://shopify.dev/changelog/checkou...
- [action required] AMAZON_PAY enumerated in DigitalWallets - https://shopify.dev/changelog/amazonp...
- [action required] Metafield description input field removal - https://shopify.dev/changelog/metafie...
- New customer address capabilities in the Admin API - https://shopify.dev/changelog/new-cus...
Timestamps
00:00 Exploring Shopify Development and Educational Initiatives
01:25 The Evolution of Development in 2025
04:23 Finding Clients and Building a Portfolio
07:21 Soft Skills in Development and Client Interaction
13:15 Navigating Cold Outreach Strategies
17:30 Building a Professional Online Presence
22:59 The Importance of Referrals and Networking
31:52 Establishing Technical Knowledge in Development
38:49 The Future of Development in an AI World
40:47 The Role of AI in Web Development
46:04 Essential Skills for Freelance Developers
47:03 Mastering JavaScript for Shopify
52:56 Shopify Updates and Changes
01:01:35 Personal Highlights and Future Collaborations
Find Jan Online
- YouTube: https://www.youtube.com/@CodingwithJan
- LinkedIn: / jan-frey
- Twitter (X): https://x.com/Coding_with_Jan
- Website: https://codingwithjan.com/
- Freemote: https://www.freemote.com/
- Javascript Training: https://www.codingwithjan.com/javascr...
Resources
- Luck Sail: • The little risks you can take to incr...
- Dev Changelog
Picks of the Week
- Karl: Ruby on Rails and Web Assembly - https://web.dev/blog/ruby-on-rails-on...
- Jan: Working with Shopify Academy - https://www.shopifyacademy.com/
- Taylor: GoRuck Weighted Vest - https://www.goruck.com/products/train...
Signup for Liquid Weekly Newsletter
Don't miss out on expert insights and tips—subscribe to Liquid Weekly for more content like this delivered right to your inbox each week - https://liquidweekly.com/

Elevate with Robert Glazer
Elevate Classics: David Heinemeier Hansson's Radical Entrepreneurship

Elevate with Robert Glazer

Play Episode Listen Later Mar 25, 2025 77:40


David Heinemeier Hansson is a true innovator. He is the co-founder of Basecamp, which has been used by over 20 million people globally. He's also the founder of Hey and the creator of the transformational Ruby on Rails, an open source web framework that was used to create Basecamp, GitHub, Shopify, Airbnb and more. He is also a frequent writer at Hey World, the New York Times bestselling author of four books, including Rework, and an award-winning racecar driver. In this classic episode of the Elevate Podcast, David shares his personal approach to work-life balance, the importance of using time effectively, and the value of Stoic philosophy in his life. He challenges conventional notions of productivity and delves into the idea of front-loading important tasks for maximum impact.

IndieRails
Ben Curtis & Josh Wood - Kids These Days

IndieRails

Play Episode Listen Later Mar 18, 2025 64:37


In this episode of IndieRails, co-founders Ben Curtis and Joshua Wood share the origin story of Honeybadger, an application monitoring tool for Ruby on Rails applications (and many others). They discuss their motivations for starting the company and the challenges they faced in the early days. The conversation also covers their approach to product development, marketing, pricing strategies, expanding into new markets and the lessons learned from their journey.
Honeybadger
Ben Curtis: Mastodon, Bluesky, LinkedIn
Josh Wood: Mastodon, Bluesky, LinkedIn

Life on Mars - A podcast from MarsBased
The Future of Agency Work: Navigating AI and Growth Strategies with Szymon Boniecki

Life on Mars - A podcast from MarsBased

Play Episode Listen Later Mar 11, 2025 44:45 Transcription Available


Are you ready to explore the dynamics of running a successful consultancy in today's fast-evolving tech landscape? Join us as we sit down with Szymon Boniecki, co-founder of Monterail, a pioneering Ruby on Rails agency. In this engaging episode, we dive into Monterail's remarkable journey over the past 14 years, sharing lessons on growth, the impact of artificial intelligence (AI), and the delicate balance between client expectations and service quality. The conversation highlights the importance of adaptability in the face of changing industry demands—from discovering the right marketing strategies to understanding the nuances of maintaining client relationships. Szymon sheds light on how embracing a primarily inbound marketing approach has bolstered Monterail's reputation and client acquisition over the years. We tackle pressing questions such as: How do you remain competitive in an era where AI promises more efficiency at lower costs? How do you manage your operational scale while still delivering exceptional value to your clients? The episode offers insightful answers along with practical strategies that any aspiring consultant or established agency leader can implement. With heartfelt anecdotes from both Szymon and host Alex, listeners will find themselves not just inspired, but also equipped with actionable takeaways to elevate their own business strategies. Don't miss this opportunity to gain valuable insights from industry leaders! Make sure to subscribe, share, and leave us your thoughts—how do you approach growth and adaptation in your consultancy?

Code and the Coding Coders who Code it
Episode 47 - Jason Swett

Code and the Coding Coders who Code it

Play Episode Listen Later Mar 4, 2025 48:19 Transcription Available


Join us for a fascinating episode where we explore the development of SaturnCI—a new and user-friendly Continuous Integration tool that arose from frustrations with existing solutions like CircleCI and GitHub Actions. Our guest, Jason Swett, shares his passion for creating a platform that not only simplifies the user experience but actively incorporates feedback from early adopters. Through candid conversations, Jason recounts his journey as a content creator in the Ruby community, and how it inspired him to address the shortcomings he observed in CI tools.
We delve into the technical challenges faced as SaturnCI grows, particularly those relating to user scalability as it onboarded new customers. Jason offers valuable insights into his tech stack choices while drawing attention to the importance of creating streamlined interfaces that cater to developers' needs. The conversation shifts to the foundation of community through his upcoming Sin City Ruby conference, showcasing the efforts made to facilitate connection among participants and ensure each attendee leaves with new friendships and knowledge.
Toward the end of our episode, we touch upon Jason's unique approach to outreach through his snail mail newsletter, where he shares insights and stories beyond technology. This creative endeavor highlights how stepping away from screens can cultivate a deeper connection with the audience. With an inviting conversational tone and enriching discussions, this episode is packed with valuable insights for anyone interested in CI tools, community-building, and finding the courage to innovate within your space. Be sure to subscribe and share your thoughts with us!

IndieRails
Garrett Dimon - A Long Winding Journey

IndieRails

Play Episode Listen Later Mar 4, 2025 59:01


Our guest for this episode is Garrett Dimon. Garrett is a developer, author, conference speaker and multi-time business owner. With some partners, he's recently formed a company called "Very Good Software" where they own and operate several SaaS apps. Garrett Dimon is a seasoned software developer and entrepreneur with a passion for front-end development and Ruby on Rails. His journey began in 1998, experimenting with HTML, CSS, and JavaScript before earning a Computer Science degree from the University of Texas at Dallas in 2000. Over the next eight years, he honed his skills in front-end development and information architecture through consulting roles with organizations of all sizes. During this time, he also shared his expertise through a column on front-end design and development for Digital Web Magazine.
In 2008, Garrett started his entrepreneurial journey and launched Sifter, a bug and issue tracking application built with Rails, which he ran until its successful sale in 2016. His experience building and selling Sifter inspired him to write and self-publish Starting and Sustaining, a book about building and running SaaS applications. After Sifter, Garrett took some time off from entrepreneurship and joined Wildbit and then egghead. Eventually he returned to independent consulting, where he helped clients including Fireside.fm and Flipper. Little did he know that later on, he'd become part owner of these companies. In the fall of 2024, the one-time business seller became the buyer. He, John Nunemaker, and Kris Priemer are operating Very Good Software, where Fireside.fm and Flipper are core products.
Links:
- GarrettDimon.com
- BlueSky
- Fireside.fm
- Flipper
- Very Good Software
- Books
Recent podcast appearances:
- Taking Over Fireside with John Nunemaker & Garrett Dimon
- Master of Generators (with Garrett Dimon) | Dead Code

Software Sessions
Hong Minhee on ActivityPub

Software Sessions

Play Episode Listen Later Feb 28, 2025 45:39


Hong Minhee is an open source developer and the creator of the Fedify ActivityPub server framework. We talk about how applications like Mastodon and Misskey communicate with one another using ActivityPub. This includes discussions on built-in activites, extending the specification in a backwards compatible way, difficulties implementing JSON-LD, the inbox model, and his experience implementing the specification. Hong Minhee: activitypub profile fedify hollo Specifications: ActivityPub W3C specification JSON Linked Data Resource Description Framework W3C Semantic Web Standards ActivityPub and WebFinger ActivityPub and HTTP Signatures ActivityPub implementations: Mastodon Misskey Akkoma Pleroma Pixelfed Lemmy Loops GoToSocial ActivityPub support in Ghost Threads has entered the Fediverse ActivityPub tools: ActivityPub Academy BrowserPub fedify CLI -- Transcript You can help correct transcripts on GitHub. What's ActivityPub? [00:00:00] Jeremy: Today, I'm talking to Hong Minhee. He is the developer of Fedify. A TypeScript library for building ActivityPub server applications. The first thing I think we should start with is defining ActivityPub. what is ActivityPub? [00:00:16] Hong: ActivityPub is the protocol that lets social networks talk to each other and it's officially recommended by W3C. It's what powers this thing we call the Fediverse which is basically a way for different social media platforms to work together. Users of ActivityPub [00:00:39] Jeremy: Can you give some examples that people might have heard of -- either users of ActivityPub or things that are a part of this fediverse? [00:00:50] Hong: Mastodon is probably the biggest one out there. And you know what's interesting? Meta threads has actually started implementing ActivityPub this summer. So this still pretty much a one way street right now. In East Asia, especially Japan, there's this really popular microblogging platform called misskey. It's got so many forks that people actually joke around and called them forkeys. but it's not just about Twitter style microblogging, there's Pixelfed which is kind of like Instagram, but for the fediverse. And those same folks recently launched loops. Which is basically doing what TikTok does, but in the Fediverse. Then you've got stuff like Lemmy and which are doing the reddit thing up in the Fediverse. [00:02:00] Jeremy: Oh like Reddit. [00:02:01] Hong: Yeah. There are so much more out there that I haven't even mentioned. Um, most of it is open source, which is pretty cool. [00:02:13] Jeremy: So the first few examples you gave, Mastodon and Meta's threads, they're very similar to, to Twitter, right? So that's what you were calling the, the Microblogging applications. And I think what you had said, which is a little bit interesting is you had said Metas threads is only one way. So could you kind of describe like what you mean by that? [00:02:37] Hong: Currently meta threads only can be followed by other ActivityPub applications but you cannot follow other people in the fediverse. [00:02:55] Jeremy: People who are using another Microblogging platform like Mastodon can follow someone on Meta's Threads platform. But the other way is not true. If you're on threads, you can't follow someone on Mastodon. [00:03:07] Hong: Yes, that's right. [00:03:09] Jeremy: And that's not a limitation of the protocol itself. That's a design decision or a decision made by Meta. [00:03:17] Hong: Yeah. They are slowly implementing ActivityPub and I hope they will implement complete ActivityPub in the future. 
Interoperability through Activities [00:03:27] Jeremy: And then the other examples you gave, one is I believe it was Pixel Fed is very similar to Instagram. And then the last examples you gave was I think it was Lemmy, you said it's similar to Reddit. Because you mentioned the term Fediverse before and you mentioned that these all use ActivityPub and since these seem like different kinds of applications, what does it mean for them to interact? Because with Mastodon and Threads I can kind of understand because they're both similar to Twitter. So you're posting messages and replying, but, but what does it mean, for example, for someone on Mastodon to interact with someone on Lemmy which is like Reddit because they seem very different. [00:04:16] Hong: People in Lemmy and Mastodon are called actors and can follow each other. They have interactions between them called activities. And there are several types of activities like, create and follow and undo, like, and so on. So, ActivityPub applications tend to, use these vocabulary to implement their features. So, for example, Lemmy uses like activities for upvoting and like activities for down voting and it's translated to likes in Mastodon. So if you submit a post on Lemmy and it shows up on your Mastodon timeline. If you like that post (it) is upvoting in Lemmy. [00:05:36] Jeremy: And probably similarly with Pixelfed, which you said is like Instagram, if you follow someone's Pixelfed account in Mastodon and they post a photo in Pixel Fed, they would see it as a post in Mastodon natively and they could give it a like there. Adding activities or properties [00:05:56] Jeremy: And these activities that you mentioned -- So the like and the dislike are those part of ActivityPub itself? [00:06:05] Hong: Yes, and this vocabulary can be extended. [00:06:10] Jeremy: So you can add, additional actions (activities) or are you adding information (properties) to the existing actions? [00:06:37] Hong: It is called activity vocabulary, and there are, things like accept, add, arrive, block, create lead, dislike, flag, follow, ignore invite, join, and so on. So, basically, almost everything you need to build social media is already there in the vocabulary, but if you want to extend some more, you can define your, own vocabulary. [00:06:56] Jeremy: Most of the things that an Instagram or a Twitter, or a Reddit would need is already there. But you're saying that you can have your own vocabulary. So if there's an action or an activity that is not covered by the specification, you can create one yourself. [00:07:13] Hong: Yes. For example, Misskey and Pleroma defined emoji reactor to represent emoji reactions. [00:07:25] Jeremy: Because the systems can extend the vocabulary. What are some other examples of cases where mastodon or any other of these systems has found that the existing vocabulary is not enough. What are some other examples of applications extending it? [00:07:45] Hong: For example, uh, mastodon defined suspended -- suspended property. They are not activities, but they are properties in the activity. ActivityPub consists of several types of objects and there are activities and normal objects like, article. they can have properties and there are several existing properties, but they can be also extended. So Mastodon extended some properties they need. So for example, they define suspended or discoverable. [00:08:44] Hong: Suspended for to tell if an actor is suspended by moderators. 
Discoverable tells if an actor itself wants to be, searched and indexed, and there are much more properties. Mastodon extended. Actors [00:09:12] Jeremy: And these are, these are properties of the actor. These are properties of the user? [00:09:19] Hong: Yes. Actors. [00:09:21] Jeremy: Cause I think earlier you mentioned that. The concept of a user is an actor, and it sounds like what you're saying is an actor can have all these properties. There's probably a, a username and things like that, but Mastodon has extended the properties so that, you can have a property on whether you wanna be searched or indexed you can have a property that says you're suspended. So I guess your account, is still there, but can't be used anymore. Something we should probably talk about then is, so you have these actors, you have these activities that I'm assuming the actors are performing on one another. What does that data look like and what does the communication look like? [00:10:09] Hong: Actors have their own dereferencable URI and when you look up that URI you get all the info about the actor in JSON-LD format [00:10:22] Jeremy: JSON-LD? [00:10:23] Hong: Yeah. JSON-LD. linked data. (The) Actor has all the stuff you expect to find on a social account name, bio URL to the profile page, profile picture, head image and more. And there are five main types of actors: application, group, organization, person and service. And you know how sometimes on Mastodon you will see an account marked as a bot? [00:10:58] Jeremy: A bot? [00:10:59] Hong: Yeah. Bot and that's what an actor of type service looks like. And the ActivityPub spec actually let you create other types beyond these five. But I haven't seen anyone actually do that yet. JSON-LD [00:11:15] Jeremy: And you mentioned that these are all JSON objects. but the LD part, the linked data part, I'm not familiar with. So what different about the linked data part of the JSON? [00:11:31] Hong: So JSON-LD is the special way of writing RDF. Which was originally used in the semantic web. Usually RDF uses (a) format (that) is called triples. [00:11:48] Jeremy: Triples? [00:11:49] Hong: Yeah, subject and predicate and object. [00:11:55] Jeremy: Subject, predicate, object. Can you give an example of what those three would be? [00:12:00] Hong: For example, is a person, it's a triple. John is a subject and is a predicate [00:12:11] Jeremy: is, is the predicate. [00:12:12] Hong: Okay. And person is a object. That's great for showing how things are connected, but it is pretty different from how we usually handle data in REST for APIs and stuff. Like normally we say a personal object has property like name, DOB, bio, and so on. And a bunch of subject predicated object triples that's where JSON-LD comes in -- is designed to look more like the JSON we are used to working with, while still being able to represent RDF Graphs. RDF graph are ontology. It's a way to represent factual data, but is, quite different from, how we represent data in relational database. And it's a bunch of triples each subject and objects are nodes and predicates connect these nodes. Semantic Web [00:13:30] Jeremy: You mentioned the Semantic web, what does that mean? What is the semantic web? [00:13:35] Hong: It's a way to represent web in the structural way, is machine readable so that you can, scan the data in the web, using scrapers or crawlers. [00:13:52] Jeremy: Scrapers -- or what was the second one? Crawling. [00:13:59] Hong: Yeah. 
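To make the actor discussion above concrete, here is a hedged sketch (written as a TypeScript object literal) of roughly what a dereferenced actor document looks like. The URLs and values are invented, and the `toot:` context entries mirror how Mastodon publishes its `suspended` and `discoverable` extensions; real servers include more fields such as public keys and endpoints.

```typescript
// Illustrative only: a trimmed-down actor document as JSON-LD.
// In RDF terms it encodes triples like
//   <https://example.social/users/alice> <type> <Person>.
const actor = {
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      // Extension terms are mapped through an extra context entry.
      toot: "http://joinmastodon.org/ns#",
      discoverable: "toot:discoverable",
      suspended: "toot:suspended",
    },
  ],
  id: "https://example.social/users/alice",
  type: "Person", // or Application, Group, Organization, Service
  preferredUsername: "alice",
  name: "Alice",
  summary: "Bio goes here",
  inbox: "https://example.social/users/alice/inbox",
  outbox: "https://example.social/users/alice/outbox",
  followers: "https://example.social/users/alice/followers",
  discoverable: true,
  suspended: false,
};
```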
Then you can have graph data of web and you can, query information about things from the data. [00:14:14] Jeremy: So is the web as it exists now, is that the Semantic web or is it something different? [00:14:24] Hong: I think it is partially semantic web, you have several metadata in Your HTML. For example, there are several specification for semantic web, like, OpenGraph metadata. [00:14:32] Jeremy: Cause when I think about OpenGraph, I think about the metadata on a webpage that, that tells other applications or websites that if you link to this page: show this image or show this title and description. You're saying that specifically you consider part of the semantic web? [00:15:05] Hong: That's, semantic web. To make your website semantic web. Your website should be able to, provide structural data. And other people can make Scrapers to scan, structural data from your website. There are a bunch of attributes and text for HTML to represent metadata. For example you have relation attribute rel so if you have a link with rel=me to your another social profile. Then other people can tell two web pages represent the same person. [00:16:10] Jeremy: Oh, I see. So you could have more than one website. Maybe one is your blog and maybe one is your favorite birds or something like that. But you could put a rel tag with information about you as a person so that someone who scrapes both websites could look at that tag and see that both of these websites are by, Hong, by this person. JSON-LD is difficult to implement and not used as intended [00:16:43] Hong: Yeah. I think JSON-LD is, designed for semantic web, but in reality, ActivityPub implementations, most of them are, not aware of semantic web. [00:17:01] Jeremy: The choice of JSON Linked Data, the JSON-LD, by the people who made the specification -- They had this idea that things that implemented ActivityPub would be a part of this semantic web, but the actual implementation of a Mastodon or a Pixelfed, they use JSON-LD because it's part of the specification, but the way they use it, it ends up not really being a part of this semantic web. [00:17:34] Hong: Yeah, that's exactly.. [00:17:37] Jeremy: You've mentioned that implementing it is difficult. What makes implementing JSON LD particularly hard? [00:17:48] Hong: The JSON-LD is quite complex. Which is why a lot of programming language don't even have JSON-LD implementations and it's pretty slow compared to just working with the regular JSON. So, what happens is a lot of ActivityPub implementations just treat JSON-LD like (it) is regular JSON without using a proper JSON-LD processor. You can do that, but it creates a source of headache. In JSON-LD there are weird equivalences like if a property is missing or if it's an empty array, that means the same thing. Or if a property has one value versus an array with just that one value in it, same thing. So when you are writing code to parse JSON-LD, you've got to keep checking if something's an array how long it is and all that is super easy to mess up. It's not just reading JSON-LD that's tricky. Creating it is just as bad. Like you might forget to include the right context metadata for a vocabulary and end up with a JSON-LD document that's either invalid or means something totally different from what you wanted. Even the big ActivityPub implementations mess this up pretty often. With Fedify we've got a JSON-LD processor built in and we keep running into issues where major ActivityPub implementations create invalidate JSON-LD. 
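A tiny TypeScript helper shows the kind of normalization Hong is describing here; this is a sketch of the general idea, not Fedify's actual implementation.

```typescript
// In JSON-LD, a missing property, a single value, and an array of values are
// often equivalent, so code treating the document as plain JSON has to coerce
// them into one shape before doing anything else.
function asArray<T>(value: T | T[] | null | undefined): T[] {
  if (value === null || value === undefined) return []; // absent property
  return Array.isArray(value) ? value : [value];        // one value vs. array
}

// All three of these mean "addressed to a single collection":
asArray(undefined);                            // => []
asArray("https://example.social/followers");   // => ["https://example.social/followers"]
asArray(["https://example.social/followers"]); // => ["https://example.social/followers"]
```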
We've had to create workaround for all of them, but it's not pretty and causes kind of a mess. [00:19:52] Jeremy: Even though there is a specification for JSON-LD, it sounds like the implementers don't necessarily follow it. So you are kind of parsing JSON-LD, but not really. You're parsing something that. Looks like JSON-LD, but isn't quite it. [00:20:12] Hong: Yes, that's right. [00:20:14] Jeremy: And is that true in the, the biggest implementations, Mastodon, for example, are there things that it sends in its activities that aren't valid JSON-LD? [00:20:26] Hong: Those implementations that had bad JSON-LD tends to fix them soon as a possible. But regressions are so often made. Yeah. [00:20:45] Jeremy: Even within Mastodon, which is probably one of the largest implementers of ActivityPub, there are cases where it's not valid, JSON-LD and somebody fixes it. But then later on there are other messages or other activities that were valid, but aren't valid anymore. And so it's this, it's this back and forth of fixing them and causing new issues it sounds ... [00:21:15] Hong: Yeah. Yeah. Right. [00:21:17] Jeremy: Yeah. That sounds very difficult to deal with. How instances communicate (Inbox) [00:21:20] Jeremy: We've been talking about the messages themselves are this special format of JSON that's very particular. but how do these instances communicate with one another? [00:21:32] Hong: Most of time, it all starts with a follow. Like when John follows Alice, then Alice adds both John and John's inbox URI to her followers list, and after John follows Alice, Whenever Alice posts something new that activities get sent to John's inbox behind the scenes. This is just one HTTP post request. Even though ActivityPub is built on HTTP. It doesn't really care about the HTTP response beyond did it work or not. If you want to reply to an activity, you need to figure out the standard inbox, URI and send or reply activity there. [00:22:27] Jeremy: If we define all the terms, there's the actor, which is the person, each actor can send different activities. those activities are in the form of a JSON linked data. [00:22:40] Hong: Yeah. [00:22:42] Jeremy: And everybody has an inbox. And an inbox is an HTTP URL that people post to. [00:22:50] Hong: Right. [00:22:52] Jeremy: And so when you think about that, you had mentioned that if you have a list of followers, let's say you have a hundred followers, would that mean that you have the URLs to all hundred of those follower's inboxes and that you would send one HTTP post to each inbox every time you had a new message? [00:23:16] Hong: Pretty much all ActivityPub implementations have, a thing called shared inbox, it's exactly what it sounds like. One inbox that all actors on a server share. Private stuff like DMs don't go there (it) is just for public posts and thoughts. [00:23:36] Jeremy: I think we haven't really talked about the fact that, when you have multiple users, usually they're on a server, right? That somebody chooses. So you could have tens of thousands, I don't know how many people can fit on the same server. But, rather than, you having to post to each user individually, you can post to the shared inbox on this server. So let's say, of your 100 followers, 50 them are on the same server, and you have a new post, you only need to post to the shared inbox once. [00:24:16] Hong: Yes, that's right. [00:24:18] Jeremy: And in that message you would I assume have links to each of the profiles or actors that you wanted to send that message to. 
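To ground the inbox exchange above, here is a simplified sketch of delivery as a single HTTP POST. The URLs are invented, and a real implementation must also sign the request with HTTP Signatures and queue retries on failure, which is omitted here.

```typescript
// Illustrative delivery of a Create activity to a recipient's (shared) inbox.
const activity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://example.social/users/alice/activities/1",
  type: "Create",
  actor: "https://example.social/users/alice",
  to: ["https://www.w3.org/ns/activitystreams#Public"],
  object: {
    id: "https://example.social/users/alice/notes/1",
    type: "Note",
    content: "Hello, fediverse!",
    attributedTo: "https://example.social/users/alice",
  },
};

async function deliver(inboxUrl: string): Promise<void> {
  const response = await fetch(inboxUrl, {
    method: "POST",
    headers: { "Content-Type": "application/activity+json" },
    body: JSON.stringify(activity),
  });
  // ActivityPub only cares whether delivery worked; otherwise the sender retries later.
  if (!response.ok) throw new Error(`Delivery failed: ${response.status}`);
}

deliver("https://other.example/inbox"); // e.g. the follower server's shared inbox
```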
Scaling challenges

[00:24:31] Jeremy: Something that I've seen in the past is there are people who have challenges with scaling their Mastodon instance or their implementations of ActivityPub as the number of followers grows. I've seen a post about Ghost, one of the companies you work with, mentioning that they've had challenges there. What are the challenges there, and how do you think those can be resolved?

[00:25:04] Hong: To put this in context, when Ghost mentioned the scaling, they were not using a message queue yet. I'm pretty sure using a message queue would help a lot of their scaling problems. That said, it is definitely true that a lot of ActivityPub software has trouble with scaling right now. I think part of the problem is that everyone's using this purely event-driven approach to sending activities around. One of the big issues is that when delivery fails it's the sender who has to retry and not the receiver. Plus there's all this overhead because the sender has to authenticate itself with HTTP signatures every time. Actually the ActivityPub spec suggests using polling too, so I'd love to see more ActivityPub software try using both approaches together.

[00:26:16] Jeremy: You mean the followers would poll who they're following instead of the person posting the messages having to send their posts to everyone's inboxes.

[00:26:29] Hong: Yeah.

[00:26:29] Jeremy: I see. So that's a part of the ActivityPub specification, but not implemented in a lot of ActivityPub implementations. And so it sounds like maybe that puts a lot of burden on the servers that have people with a lot of followers, because they have to post to every single follower server, and maybe the server is slow or they can't reach it. And like you said, they have to just keep trying and trying. There could be a lot of challenges there.

[00:27:09] Hong: Right.

Account migration

[00:27:10] Jeremy: We've talked a little bit about the fact that each person, each actor, is hosted by a server, and those servers can host multiple actors. But if you want to move to another server, either because your server is shutting down or you just would like to change servers, what are some of the challenges there?

[00:27:38] Hong: ActivityPub and the Fediverse already have a specification for an account move. It's called FEP-7628 Move Actor. The first thing you need to do when moving an account is prove that both the old and new accounts belong to the same person. You do this by adding the old account's URI to the new account's alsoKnownAs property. And then the old account contacts all the other instances it's moving away from by sending out a Move activity. When a server gets this Move activity, it checks that both accounts really do belong to the same person, and then it makes all the accounts that were following the old account start following the new one instead. That's how the new account gets to keep all the old account's followers. Pretty much all the major ActivityPub software has this feature built in -- for example Mastodon, Misskey, you name it.

[00:29:04] Jeremy: This is very similar to the posts, where when you execute a move, the server that originally hosted that actor needs to somehow tell every single other server that was following that account that you've moved. And so if there are any issues with communicating with one of those servers, or you miss one, then it just won't recognize that you've moved. You have to make sure that you talk to every single server.

[00:29:36] Hong: That's right.
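Based on Hong's description, a move roughly looks like the following on the wire. The URLs are placeholders, and real actor documents carry additional @context entries that are omitted here.

```typescript
// The new account lists the old one in alsoKnownAs (placeholder URLs).
const newActor = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://new.example/users/john",
  type: "Person",
  alsoKnownAs: ["https://old.example/users/john"],
};

// The old account then sends a Move activity to every server that follows it.
const move = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://old.example/activities/move-1",
  type: "Move",
  actor: "https://old.example/users/john",
  object: "https://old.example/users/john", // the account being moved
  target: "https://new.example/users/john", // where it is moving to
};

// A receiving server checks that `target` really lists `object` in its
// alsoKnownAs before re-pointing its local followers at the new account.
```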
[00:29:38] Jeremy: I could see how that could be a difficult problem sometimes if you have a lot of followers.

[00:29:45] Hong: Yeah.

Fedify

[00:29:46] Jeremy: You've created a TypeScript library, Fedify, for building ActivityPub-powered applications. What was the reason you decided to create Fedify?

[00:29:58] Hong: Fedify is an ActivityPub server framework I built for TypeScript. It basically takes away a lot of the headaches you'd get trying to implement an ActivityPub server from scratch. The whole thing started because I wanted to build Hollo -- a single-user microblogging platform I built. But when I tried to implement ActivityPub from the ground up, it was kind of a nightmare. Imagine trying to write a CGI program in Perl or C back in the late nineties, where you are manually printing HTTP headers and HTML. There just wasn't any good abstraction layer to go with. There were already some libraries and frameworks for ActivityPub out there, but none of them really hit the sweet spot I was looking for. They were either too high level and rigid -- like you could only build a Mastodon clone -- or they barely did anything at all. Or they were written in languages I didn't really know.

Ghost and Fedify

[00:31:24] Jeremy: I saw that you are doing some work with Ghost. How is Ghost using Fedify?

[00:31:30] Hong: Ghost is an open source publishing platform. They have put some money into Fedify, which is why I get to work on it full time now. Their ActivityPub feature is still in private beta, but it should be available to everyone pretty soon. We work together to improve Fedify. Basically, they are a user of Fedify. They report bugs and request new features for Fedify, and then I fix them or implement them first.

[00:32:16] Jeremy: Ghost, to my understanding, is a blogging platform and a newsletter platform. So what does it mean for them to implement ActivityPub? What would somebody using Mastodon, for example, get when they follow somebody using Ghost?

[00:32:38] Hong: Ghost will have a fediverse handle for each blog. If you follow it from your Mastodon account or something similar, then when a new post is published, these posts will show up in your timeline in Mastodon and you can like them or share them. And in the dashboard of Ghost you can see who liked their posts or shared their posts and so on. It is like how Mastodon works, but in Ghost.

[00:33:26] Jeremy: I see. So if you are writing a Ghost blog and somebody follows your blog from Mastodon, sort of like we were talking about earlier, they can like your post, and on the blog itself you could show, oh, I have 200 likes. And those aren't necessarily people who were on your Ghost website; they could be people that were liking your post from Mastodon.

[00:33:58] Hong: Yes.

Misskey / Forkey development in Asia

[00:34:00] Jeremy: Something you mentioned at the beginning was there is a community of developers in Asia making forks of, I believe, Mastodon, right?

[00:34:13] Hong: Yeah.

[00:34:14] Jeremy: Do you have experience working in that development community? What's different about it compared to the more Western-centric community?

[00:34:24] Hong: They are very similar in most ways. The key difference is language, of course. They communicate in Japanese primarily. They also accept pull requests in English, but there are tons of comments in Japanese in their code, so you need to translate them into English or your first language to understand what the code does. So I think that makes a barrier for Western developers. In fact, many Western developers that contribute to Misskey or forkey are able to speak a little Japanese. And many of the developers of Misskey and forkey are kind of otaku.
[00:35:31] Jeremy: Oh, otaku, okay.

[00:35:33] Hong: It's not a big deal, but you can see the difference at a glance.

[00:35:41] Jeremy: Yeah. You mentioned one of the things that I believe Misskey implemented was the emoji reactions, and maybe one of the reasons they wanted that was so that they could react to each other's posts with, you know, anime pictures or things like that.

[00:35:58] Hong: Yeah, that's right.

[00:36:01] Jeremy: You've mentioned Misskey and forkey. So is Misskey a fork of Mastodon, and then is forkey a fork of Misskey?

[00:36:10] Hong: No, Misskey is not a fork of Mastodon. It is built from scratch. It's its own implementation. And forkeys are forks of Mastodon.

[00:36:22] Jeremy: Oh, I see. But both of those are primarily built by Japanese developers.

[00:36:30] Hong: Yes. Whereas Mastodon is written in Ruby -- Ruby on Rails -- Misskey is built in TypeScript.

[00:36:40] Jeremy: And because of ActivityPub -- they all implement it -- you can communicate with people between Mastodon and Misskey, because they all understand the same activities.

[00:36:56] Hong: Yes.

Backwards compatible activity implementations

[00:36:57] Jeremy: You did mention, since there are extensions -- like Misskey has the emoji reactions -- when there is an activity that an implementation doesn't support, what happens between the two servers? Do you send it to a server's inbox and then the server just doesn't do anything with it?

[00:37:16] Hong: Some implementers consider backwards compatibility, so they design it to work with other implementations that don't support that activity. For example, Misskey uses the Like activity for emoji reactions. So if you put an emoji on a Mastodon post, then in Mastodon you get one like. It's intended behavior by the Misskey developers that it falls back to normal likes. But sometimes ActivityPub implementers introduce entirely new activity types. For example, Pleroma introduced EmojiReact. And if you put an emoji reaction on a Mastodon post from Pleroma, in Mastodon you see nothing, because Mastodon just ignores them.

[00:38:37] Jeremy: If I understand correctly, both Misskey and Pleroma are independent implementations of ActivityPub, but with Misskey, they can tell when their message is backwards compatible: if you don't understand the emoji reaction, it'll be embedded inside of a Like message. Whereas with Pleroma, they send an activity that Mastodon can't understand at all, so it just doesn't do anything.

[00:39:11] Hong: Yes, right. But Misskey also understands the EmojiReact activity, so between Pleroma and Misskey they can exchange emoji reactions with no problem.

[00:39:27] Jeremy: Oh, I see. So they both understand that activity. They both implement it the same way, but then when Misskey communicates with Mastodon, or with an instance that it knows doesn't understand it, it sends something different.

[00:39:45] Hong: Yeah, that's right.

[00:39:47] Jeremy: The servers -- can they query one another to know which activities they support?

[00:39:53] Hong: Usually ActivityPub implementations also implement the NodeInfo specification. It's a user-agent-like thing in the Fediverse. Implementations tell the other instance whether they are Mastodon or something else. You can query the type of server.
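A hedged sketch of that NodeInfo lookup. The well-known discovery path comes from the NodeInfo specification; the host, the response fields shown, and the version string are illustrative.

```typescript
// NodeInfo discovery: the well-known endpoint says where the NodeInfo document
// lives; fetching that document tells you what software the server runs.
async function fetchServerSoftware(host: string): Promise<{ name: string; version: string }> {
  const discovery = await fetch(`https://${host}/.well-known/nodeinfo`);
  const { links } = (await discovery.json()) as {
    links: { rel: string; href: string }[];
  };

  // Servers typically advertise one or more NodeInfo documents (e.g. schema 2.0 / 2.1).
  const first = links.find((link) => link.href);
  if (!first) throw new Error(`${host} does not expose NodeInfo`);

  const nodeinfo = await (await fetch(first.href)).json();
  return nodeinfo.software; // e.g. { name: "mastodon", version: "4.3.1" } (illustrative)
}

// e.g. const software = await fetchServerSoftware("mastodon.social");
```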
[00:40:20] Jeremy: Okay, so within ActivityPub, each of the servers -- is the term node the word they use for each server?

[00:40:31] Hong: Yes. Right.

[00:40:32] Jeremy: You have the nodes, which can have any number of actors, and the servers send activities to one another, to each other's inboxes. And so that's the way they all communicate.

[00:40:49] Hong: Yeah.

Building an ActivityPub implementation

[00:40:50] Jeremy: You've implemented ActivityPub with Fedify because you found there weren't good enough implementations or resources already. Did you implement it based off of the specification, or did you look at existing implementations while you were building your implementation?

[00:41:12] Hong: To be honest, instead of just diving into the spec, I usually start by looking at actual ActivityPub software code first. The ActivityPub spec is so vague that you can't really build something just from reading it. So when we talk about ActivityPub, we are actually talking about a whole bunch of other technical standards too: WebFinger, HTTP Signatures, and more. So you need to understand all of these as well.

[00:41:47] Jeremy: With the specification alone, you were saying, it's too vague, and so what ends up happening -- I'm not sure if it's right to call it a spec, but looking at the implementations that people have already made, that collectively becomes the spec, because trying to follow the spec just by itself is maybe too difficult.

[00:42:12] Hong: Yes.

[00:42:14] Jeremy: Maybe that brings up the issues you were talking about before, where you have specifications like JSON-LD that are so complicated that even the biggest implementations aren't quite following them exactly.

[00:42:28] Hong: Yeah.

[00:42:29] Jeremy: If somebody wanted to get started with understanding a little bit more about ActivityPub, or building something with it, where would you recommend they start?

[00:42:44] Hong: I recommend digging into a lot of code from actual implementations first: Mastodon, Misskey, Akkoma and so on. There are some really cool tools that have been so helpful. For example, ActivityPub Academy is this awesome Mastodon server for debugging ActivityPub. It makes it super easy to create a temporary account and see what activities are going back and forth. There is also BrowserPub. BrowserPub is this neat tool for looking up and browsing ActivityPub objects. It's really handy when you want to see how different ActivityPub software handles various features. I also recommend using Fedify. I've got to mention the Fedify CLI, which comes with some really useful tools.

[00:43:46] Jeremy: So if someone uses Fedify, they're writing an application in TypeScript, then it sounds like they have to know the high-level concepts. They have to know what the different activities are, what is inside of an actor. But the actual implementation of how do I create and parse JSON Linked Data, those kinds of things are taken care of by the library.

[00:44:13] Hong: Yes, right.

[00:44:16] Jeremy: So in some ways it seems like it might be good to, like you were saying, use the tools you mentioned to create a test Mastodon account, look at the messages being sent back and forth, and then when you're trying to implement it, starting with something like Fedify might be good, because then you can really just focus on the concepts and not worry so much about the implementation details.

[00:44:43] Hong: Yes, that's right.
[00:44:45] Jeremy: Is there anything else you wanted to mention or thought we should have talked about?

[00:44:52] Hong: Mm. I want to talk about a lot more stuff about ActivityPub, but it's difficult for me to speak in English, so it's a shame to only talk about a little of it.

[00:45:15] Jeremy: We need everybody to learn Korean, right?

[00:45:23] Hong: Yes, please. (laughs)

[00:45:23] Jeremy: Yeah. Well, I wanna thank you for taking the time. I know it must have been really challenging to give an interview in, you know, a language that's not your native one. So thank you for spending the time to talk with me.

[00:45:38] Hong: Thank you for having me.

Code and the Coding Coders who Code it
Episode 46 - David Hill

Code and the Coding Coders who Code it

Play Episode Listen Later Feb 25, 2025 40:56 Transcription Available


David Hill, the innovative mind behind "Ode to RailsConf" and a senior engineer at Simplify, invites us to explore his fascinating journey into the world of podcasting. Inspired by the final announcement of RailsConf, David crafted a platform to celebrate the cherished memories of the event while also providing himself with a bridge to manage social interactions more comfortably. With his love for board games providing a structured approach, David shares how the podcasting framework has transformed him from a hesitant introvert to a comfortable conversationalist.

Our conversation takes an intriguing turn as we delve into the art of podcast guest planning and the intricate process of editing conference videos. From featuring guests from the Scholar Guide program at RailsConf and RubyConf to orchestrating a unique episode with nine guests from a single company, we leave no stone unturned. Engaging discussions with prominent figures like Freedom Dumlao and Sarah May offer listeners a treasure trove of insights, while upcoming episodes with Ruby Central's Rhiannon and Ali Vogel promise to further explore the dynamic world of PR, marketing, and operations.

As we navigate the evolution of podcasting strategies, the conversation shifts to the often-overlooked balance between coding and communication. The journey from a simple chat between friends to a thriving podcasting community has not been without its challenges and surprises. We reflect on the impact of Jason Charnes' departure due to family commitments and celebrate the resilience and growth that comes with embracing new roles. Amidst it all, the spirit of supporting creators, learning new skills, and fostering personal growth shines through, with an optimistic outlook for the show's future.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

CISSP Cyber Training Podcast - CISSP Training Program
CCT 222: TP-Link Router Risks and Software Development Security for CISSP (D8.2)

CISSP Cyber Training Podcast - CISSP Training Program

Play Episode Listen Later Feb 24, 2025 41:21 Transcription Available


Send us a text

Unlock the secrets to fortifying your software development practices with expert insights from Shon Gerber. As we navigate the complex landscape of cybersecurity, we delve deep into the urgent risks posed by TP-Link routers, used by a staggering portion of U.S. households. Discover practical strategies for protecting your network, like firmware updates and firewall configurations, and learn how potential geopolitical threats could reshape your tech choices. This episode arms you with the knowledge to safeguard your digital ecosystem against looming threats and prepares you for possible shifts in government regulations.

Venture into the vibrant world of programming languages and development environments, tracing their evolution from archaic beginnings with BASIC and C# to today's dynamic platforms like Python and Ruby on Rails. Shon unravels the intricacies of runtime environments and libraries, emphasizing why sourcing trusted libraries is non-negotiable in preventing security breaches. For those new to programming, we demystify Integrated Development Environments (IDEs) and offer insights into why securing these tools is paramount, especially as AI makes coding more accessible than ever before.

As we wrap up, Shon guides you through best practices for securing both your development and runtime environments. From addressing vulnerabilities inherent in IDEs to ensuring robust CI/CD pipeline security, we cover it all. Learn about the pivotal role Dynamic Application Security Testing (DAST) plays and how to seamlessly integrate it within your development processes. This episode is a trove of actionable advice, aimed at equipping you with the skills and foresight needed to enhance your cybersecurity strategies and development protocols. Don't miss this comprehensive guide to making informed decisions and fortifying your software's security posture.

Gain exclusive access to 360 FREE CISSP Practice Questions delivered directly to your inbox! Sign up at FreeCISSPQuestions.com and receive 30 expertly crafted practice questions every 15 days for the next 6 months—completely free! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!

Maintainable
Marty Haught: Rethinking Technical Debt—Is It Really Just Drift?

Maintainable

Play Episode Listen Later Feb 18, 2025 52:39


Episode Overview

Marty Haught joins Robby to discuss the sustainability of open-source projects, the challenges of maintaining RubyGems, and why the metaphor of technical debt may not fully capture how software ages. Instead, he suggests thinking of it as drift—the natural misalignment of software with its evolving purpose over time.

They also dig into security challenges in package management, including how Ruby Central worked with Trail of Bits to audit RubyGems. Marty also shares insights on the EU Cyber Resilience Act and how it might affect open-source maintainers worldwide. Finally, they explore how companies can support open-source sustainability through corporate sponsorships and individual contributions.

Topics Discussed

[00:01:00] The two pillars of maintainable software: good tests and readability.
[00:02:40] From Perl to Ruby: How readability changed Marty's approach to programming.
[00:07:20] Is technical debt the right metaphor? Why "drift" might be a better fit.
[00:11:00] What does it take to maintain RubyGems? Marty's role at Ruby Central.
[00:14:00] Security in package management: How RubyGems handles vulnerabilities.
[00:16:40] The role of external audits: Partnering with Trail of Bits for security improvements.
[00:20:40] EU Cyber Resilience Act: How new regulations might affect open-source projects.
[00:26:00] Funding open source: Why corporate sponsorships are becoming essential.
[00:33:40] Advocating for technical debt work in teams: How to make a compelling case.
[00:38:20] Processes in distributed teams: Balancing structure with flexibility.

Key Takeaways

Technical debt is often misunderstood. The real issue may not be shortcuts taken in the past, but the way software naturally drifts from its original purpose.
Security in package management is a growing concern. Open-source ecosystems like RubyGems require continuous investment to remain secure.
Open source needs sustainable funding. Relying on volunteers is not a long-term solution—companies need to contribute via corporate sponsorships.
Advocating for code improvements requires strategy. Engineers should frame technical debt discussions around business impact, not just code quality.

Resources Mentioned

Marty Haught on LinkedIn
Marty Haught on Twitter
Ruby Central
RubyGems
Auditing the Ruby Ecosystem's Central Package Repository – Trail of Bits
EU Cyber Resilience Act Overview
What the EU's New Software Legislation Means for Developers (GitHub Blog)
Ruby Central Open Source Program – Get Involved
Corporate Sponsors Program
Give and Take by Adam Grant

Connect with Marty

LinkedIn
Twitter
BlueSky

Thanks to Our Sponsor!

Need a smoother way to share your team's inbox? Jelly's got you covered!

Code and the Coding Coders who Code it
Episode 45 - Stephen Margheim

Code and the Coding Coders who Code it

Play Episode Listen Later Feb 4, 2025 38:46 Transcription Available


Stephen Margheim, a celebrated figure in the Ruby and Rails community, returns to unravel the fascinating intricacies of his latest project—writing a parser for SQLite's SQL dialect in Ruby. He shares his enlightening journey of translating complex SQL syntax, which at first seemed a simple endeavor but soon unfolded into a realm of deep learning and unexpected challenges. Alongside this, Stephen collaborates with Aaron Francis on "High Leverage Rails," a video course designed to spotlight the synergy between Rails and SQLite, offering a treasure trove of insights into developing high-quality applications.

We dive into the nuanced world of SQL parsing, where Stephen candidly recounts the arduous process of porting SQLite's lexer and parser into Ruby. What began as a straightforward task quickly turned into a labyrinth of complex syntax and discrepancies that required astute attention and incremental progress. He reflects on the absence of a fully compatible SQLite parser in any language, emphasizing the significance of open parsers like Postgres in creating a robust ecosystem for tools and libraries.

Stephen's excitement is palpable as he discusses Quickdraw, a groundbreaking testing framework that revolutionizes testing in multi-core environments. This innovation, along with the anticipation for RailsConf 2025 in Philadelphia, paints a bright future for the Rails community. With rich discussions on parsing, testing, and upcoming Rails events, this episode promises to inspire and engage both seasoned developers and newcomers to the Ruby and Rails landscape. Join us for an episode filled with excitement, insight, and a glimpse into the future of Rails development.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

IndieRails
Jim Remsik - Genuinely Desiring Success In Those Around Him

IndieRails

Play Episode Listen Later Feb 4, 2025 62:21


In this episode, Jeremy & I are excited to share a mic with Jim Remsik. Jim is the Founder and CEO of a digital agency called Flagrant. He is also a conference organizer…he created and hosts the popular Madison + Ruby conference. Jim has held many roles: MC, speaker, developer, CEO, conference organizer, writer and many more, but I imagine most people know him as someone who is an all around awesome human.

Jim Remsik shares his journey through the tech industry, from his early days in software development to his transition from government work to agency life. He discusses building companies like Bendyworks and Flagrant, emphasizing how design and professional connections shaped his career path. The conversation follows his path from running Adorable to launching Flagrant, including the personal challenges he faced—health issues and navigating business during the pandemic. Jim reflects on the vital role of personal growth, team empowerment, and maintaining human connections in a remote-first world. Drawing from his agency experience, he shares how taking initiative and self-authorization were crucial to his entrepreneurial success.

Throughout the discussion, Jim offers valuable perspectives on consulting and collaboration, emphasizing his core belief in actively supporting others' success. He explores the varied landscape of consulting work, industry uncertainties, and the power of personal mission statements. The conversation highlights how meaningful connections, purposeful work, and courageous leadership intertwine. Jim's guiding motto reveals how generosity and community-building shape his professional approach.

Show Links:

Socials
https://bsky.app/profile/jremsikjr.bsky.social
https://www.linkedin.com/in/jremsikjr/

Flagrant
https://www.beflagrant.com/team/
https://www.beflagrant.com/blog/author/jim-remsik/

Conferences
https://www.madisonruby.com/
https://rubyconf.org/about/

Posts
https://medium.com/authority-magazine/jim-remsik-of-flagrant-five-things-i-wish-someone-told-me-when-i-first-launched-my-business-or-14699acfbda2
https://www.beflagrant.com/blog/2024-predictions-2024-01-30
https://devops.com/the-ruby-on-rails-resurgence/
https://devops.com/the-ruby-on-rails-resurgence-2/

Other Podcasts
https://shows.acast.com/dead-code/episodes/all-those-letters-that-you-do-with-jim-remsik
https://www.codewithjason.com/podcast/14444689-211-jim-remsik-ceo-of-flagrant/

XO Ruby
https://bsky.app/profile/xoruby.com

Readalong - Practical Object-Oriented Design: An Agile Primer Using Ruby
https://app.thestorygraph.com/readalongs/5983b152-bf48-4ff3-aeb0-976ea67d0d08

The Rails Changelog
029: Tuning Ruby on Rails App Performance with Jean Boussier

The Rails Changelog

Play Episode Listen Later Jan 23, 2025 64:50


In this episode, Jean Boussier and I dive deep into performance optimisation for Ruby on Rails applications. From diagnosing common bottlenecks and mastering advanced caching techniques to leveraging YJIT, jemalloc, and server concurrency models, we explore practical strategies for scaling apps efficiently. We also discuss key metrics for monitoring production performance, cost-effective observability, and modern Rails patterns to embrace or avoid. Perfect for developers looking to take their Rails performance game to the next level!

Ruby and Rails Conference 11–13.04.2025 Wrocław, Poland
Try Mailtrap for free
Rails Guide: Tuning Performance for Deployment
Nate Berkopec: The Rails Performance Workshop

Life on Mars - A podcast from MarsBased
087 - Our 2024 in review + forecast for 2025

Life on Mars - A podcast from MarsBased

Play Episode Listen Later Jan 21, 2025 47:01 Transcription Available


What if economic downturns could be a springboard for growth instead of a setback? Join Àlex, the CEO and founder of MarsBased, as he shares the inside scoop on how his company not only weathered the technological market storm of 2024 but emerged stronger. Learn from their journey through industry challenges, management demands, and the strategic pivots that led to exciting collaborations and a revenue milestone of 2.5 million euros, all while remaining bootstrapped and expanding the team.

Despite grappling with project estimation issues and sudden contract cancellations, the story of MarsBased is one of resilience and adaptability. Àlex recounts the emotional journey of navigating a tense Q1 and the strategic decisions that helped regain stability. From transparent communication practices to celebrating MarsBased's decade-long journey, this episode paints a vivid picture of staying the course amidst uncertainty. Get insights into how MarsBased's strategic shift to smaller projects and consulting work opened doors to contracts with entities like Mobile World Capital.

Join us as we explore MarsBased's technological strategy evolution, focusing on the decision to standardize their tech stack with Ruby on Rails and React. Àlex explains how this strategic move ensures reliable, cost-effective solutions for their clients without compromising quality. Discover the balance between embracing innovation in areas like VR and AI while staying committed to proven technologies. As the episode wraps up, Àlex invites listeners to share feedback and ideas, reinforcing the collaborative spirit of the MarsBased journey.

Support the show

Rails with Jason
244 - Jeff Dwyer, Founder & CEO at Prefab

Rails with Jason

Play Episode Listen Later Jan 14, 2025 62:38 Transcription Available


This episode explores how Prefab enhances deployment workflows by integrating feature flags with Java microservices and Ruby on Rails, drawing on Jeff's experiences at HubSpot and EasyCater. We discuss strategies for minimizing deployment risks, improving PR reviews, and mentoring junior developers through clear objectives and constructive feedback. Real-world examples and practical advice offer insights into building efficient development systems and fostering growth in engineering teams.

Links:
- Prefab
- Jeff Dwyer on LinkedIn

Practical Founders Podcast
#126: Jason Fried on 20 Years Bootstrapping BaseCamp at 37signals

Practical Founders Podcast

Play Episode Listen Later Jan 10, 2025 57:57


Jason Fried is the co-founder and CEO of 37signals, makers of the popular Basecamp project management software, which is still growing and very profitable after 20 years. He is going long and still having fun as an engaged CEO, building great products with great marketing that stands out.

Jason has long advocated for software founders to avoid VC funding and build sustainable businesses that are great for customers and generate healthy profits for the owners. His best-selling book, Rework, shared his practical approach for entrepreneurs.

In this wide-ranging interview, Jason discusses these important topics:

How the core principles of Basecamp remain focused on simplicity and essential tools for project management after 20 years.
Why Basecamp targets small businesses, avoiding the enterprise market that many competitors chase.
Why software should fit the needs of the user, rather than forcing users to adapt to complex tools for big companies.
How profitability, not growth, provides the freedom to innovate and explore new ideas.
Why competing against your costs is more important than competing against other companies.
How small teams have the agility to win against big companies.

Quote from Jason Fried, co-founder and CEO of 37signals:

"My sense of independence has always been important to me. That's why I became an entrepreneur: to do things the way I wanted to do them. Otherwise, why be an entrepreneur? It's true when you work, you're working for your customers. That's always going to be true. But you still have a sense of independence. You get to make your own decisions.

"What people don't realize is when you raise money, you don't really work for yourself anymore. You really don't. You work for someone else's schedule, for someone else's fulfillment, for someone else's return. That never appealed to me.

"I want our products to explain themselves. I want our success to explain ourselves. I don't want to have to explain myself on a quarterly basis to somebody who's trying to get a return out of me. I'm not interested. So for all those reasons, it just wasn't right to raise big funding."

Links
Jason Fried on LinkedIn
Jason Fried on Twitter
37Signals on LinkedIn
37Signals website
Basecamp website
HEY website
Ruby on Rails website

Podcast Sponsor – Full Scale

This week's podcast is sponsored by Full Scale, one of the fastest-growing software development companies in any region. Full Scale vets, employs, and supports over 300 professional developers, designers, and testers in the Philippines who can augment and extend your core dev team. Learn more at fullscale.io.

The Practical Founders Podcast

Tune into the Practical Founders Podcast for weekly in-depth interviews with founders who have built valuable software companies without big funding. Subscribe to the Practical Founders Podcast using your favorite podcast app or view on our YouTube channel. Get the weekly Practical Founders newsletter and podcast updates at practicalfounders.com/newsletter.

Agile Mentors Podcast
#129: 2025: The Year Agile Meets AI and Hyper-Personalization with Lance Dacy

Agile Mentors Podcast

Play Episode Listen Later Jan 8, 2025 43:15


Curious about the future of Agile in 2025? Join Brian and Lance Dacy as they dive into the rise of AI, hyper-personalization, and how teams can balance innovation with customer focus. Plus, discover actionable insights to navigate a rapidly evolving landscape—don't miss this forward-looking discussion!

Overview

In this episode of the Agile Mentors Podcast, Brian and Lance set their sights on 2025, exploring how AI is transforming Agile practices and reshaping customer engagement. They discuss the shift from output to outcome metrics, the expansion of Agile beyond IT, and the critical role of leadership agility. With practical takeaways on fostering continuous learning and delivering real value, this episode equips teams and leaders to stay ahead in a fast-changing world.

References and resources mentioned in the show:

Lance Dacy
Accurate Agile Planning
Subscribe to the Agile Mentors Podcast
Advanced Certified Scrum Product Owner®
Advanced Certified ScrumMaster®
Mountain Goat Software Certified Scrum and Agile Training Schedule
Join the Agile Mentors Community

Want to get involved? This show is designed for you, and we'd love your input. Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one. Got an Agile subject you'd like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com

This episode's presenters are:

Brian Milner is SVP of coaching and training at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.

Lance Dacy is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®. Lance brings a great personality and servant's heart to his workshops. He loves seeing people walk away with tangible and practical things they can do with their teams straight away.

Auto-generated Transcript:

Brian (00:00) Happy New Year's Agile Mentors. We are back and a very happy New Year's to everyone who's listening. Welcome back for another episode and another new year of the Agile Mentors podcast. I'm with you as always, Brian Milner, and we have our friend of the show for our annual kind of tradition now. We have Mr. Lance Dacy back with us. Welcome in, Lance. Lance Dacy (00:23) Thank you, Brian. Happy New Year to all of y'all. Happy to be setting this tradition. I think it's two times now, so we'll just call it a tradition, but I love it. Thank you for having me. Brian (00:32) Very glad to have you here. The tradition we're referring to is that we like to take the first episode of the new year and just take a pause and kind of look ahead a little bit. What do we see coming up? What do we think this new year is going to be like? Obviously, it's a year of change. Here in the US, we'll have a new president that comes in. I'm not going to get into whether you like that or not, but it's new. It's going to be a change. There's going to be differences that take place. And I know there's a lot of differences and changes going on just in the way businesses operate and how things are run and lots of new technologies, lots of new trends. So we just thought we'd take a pause and kind of scan the horizon and maybe give you our take at least on what we're hearing and what we're seeing. And you can see if you agree with these or not.
We'd love to hear from you in our discussion forum on the Agile Mentors Community afterwards if you have other thoughts or opinions on this. let's get into it. Let's start to talk about this. So Lance, I guess I'll start. I'll just turn it over to you and ask you that generalized question. Give me one point or one thing that you've been reading or seeing recently that you think is going to be a really important thing for us to kind of be prepared for or look out for here in 2025. Lance Dacy (01:44) Great question, Brian. There's so many things out there, and I thought we could start by looking back a little bit. if we're okay with that, just let's summarize, you what did we see happen in 2024? You mentioned, you know, 2025 is a year of change, absolutely, but 2024 was definitely a different kind of year as far as my experience is concerned and seeing a lot of industry trends that are just popping up out of nowhere. Now we are fans of agility, which means we embrace quick, efficient changes, but there's things going on in 2024 I never predicted Brian (01:52) Yeah, yeah. Lance Dacy (02:19) fast. And so I think we've got to reshape the way that we're thinking about these things. I think the topic of mind, one of the biggest shifts that I saw in 2024 that I think will continue in 2025 is AI. So that artificial intelligence is a big word that we keep lumping into a lot of things. And I just wanted to take a pause a little bit and say, I know everybody's got a little bit different experience about AI, but in particular, as it relates to product development and agile delivery, which is what this show is basically focused on, I thought we could look at some insights of what happened in 2024 with that. And so I think I call us babies at it right now. And I know that may be a bad term, but I have a lot of experience with AI and machine learning and things like that. But as far as the use of it, I feel like we're all a little bit more of babies on how to use it in the day-to-day work that we're trying to accomplish. And I think that comes with learning something. I embrace that. I don't mean that as a downplay, by the way, but that we're all babies. I'm just saying we're less mature about it. We're experimenting with a lot of things. And I don't think that some of the AI is all good. I I embrace it as a thing that's going to help us later on, but... I thought we could just share our experiences of how we've seen this thing manifest itself. I think tools like AI driven, I'm going to use the bad word JIRA, but in place of that, just use any product backlog management tool that you see. And I've seen a lot of organizations not just talk the game of, we use AI for our backlog management, but I'm talking about backlog prioritization, sprint planning capacity. And I believe what's happening is it frees teams up to do more of the... value driven work that we're going to see a lot more of in 2025. So what I mean by that is when we got automated testing and development, if you remember those days, it freed the developers up or the testers, should say, from doing less of the does this thing work to more of how does it feel using it as a human being, you know, automating that. So I've seen things like JIRA, with AI with JIRA and GitHub co-pilots, you know, reshaping the value creation in the teams and eliminating the need of having to do very low level tasks. So what is your thoughts on that and do you have any experiences of that as well? Brian (04:36) Yeah, for sure. 
There's a couple of things I've found that just kind of some stats I found from some different places. you know, listeners know I'm kind of like a data geek here. want to know where the data comes from and want to make sure it's a, yeah. Yeah. You want to make sure it's a solid source and it's not some questionable, you know, sketchy kind of, well, I asked 10 of my friends and here's the answer, you Right, right. Exactly. Lance Dacy (04:48) Good hand. I love that. or a FBI. Brian (05:02) But so there's a couple of things that came back. One was, I think Forrester is probably a pretty good source of information. They have some pretty good rigor to their process. And they have a thing that they put out every year. This one's just called the Developer Survey. And this is the one that they put out for 2024 that I'm quoting here. But a couple of stats from that that I found interesting. One was, 49 % of developers are expecting to use or are already using general AI assistance in their coding phase of software development, which, you know, maybe higher than most people might think. But it doesn't surprise me too much. I think that's probably kind of what I'm used to it. Understand saying, you know, an assistant co-pilot, that kind of thing. They're not saying 49 % have been replaced. They're saying 49 % are being assisted. by that and that seems about right. Maybe again, maybe a little higher than some might expect, but that seems like not too big of a shocker. Lance Dacy (06:04) Well, the animation too. So when you talk about assistance versus letting it run it, I saw a gentleman on LinkedIn, which is also a good. I wish we could interact more with our users on this call, because I'd love to hear their perspective. But I heard somebody say, let AI write my code. No, thank you. Code is like poetry. It has to be refined over time. It has humanistic qualities. And I was like, man, that's a really good point. But when I try to show my kids how to create a Ruby on Rails app to do an e-commerce site and I type it into chat GPT or whatever tool you use, I was amazed at how quickly it was able to put together. mean, you got to still know the file structures and things like that. But I don't know that developers are just going to say, I was going to write the whole thing. think they're, I think it's saving us keystrokes. I think we talked about that last time as well, but that's an interesting, interesting take. Brian (06:50) Yeah. Yeah. So I thought, I thought that was interesting. There was another, you know, I'm kind of, I'll move around between these two sources basically, but there's another source that I saw where there was a Harvard Business Review article. posted this on LinkedIn a while back, but it was a kind of the source of it was about a survey that they did to try to determine the impact on the job market. And one of the things they did was now their data was from July, 2021 to July, 2023. So this is a little bit older data, right? The survey was trying to say in analyzing the job postings on freelancer job sites specifically, and they tried to identify ones that might be affected by the advent of chat GPT, because that's the period where chat GPT really started to come onto the scene and started to become prevalent. And what they found was about a 21 % decrease in the weekly number of posts and what they call automation prone. Lance Dacy (07:35) Yeah. Brian (07:47) jobs compared to manually intensive jobs. 
They said riding jobs were affected the most 30.37 % decrease, followed up by software app and web development 20.62 % decrease and engineering 10.42 % decrease. But the interesting kind of thing is they found it kind of towards the end of that there was some increases and their kind of conclusion was that there was actually an increase in demand of the kinds of work that required human judgment and decision-making. And so that kind of ties back into what you were saying about let AI write my code whole, completely no, there's still a requirement for that human judgment and decision-making. I think this is why I'm not afraid of it, right? This is kind of, I don't want to make this an AI show, it's about the future in 2025, but when we had a... Lance Dacy (08:17) All right. Right. Brian (08:40) When we've had AI shows, that's one of the things I've said to the audience here is that I'm not so afraid of AI being sort of the doom and gloom of it's going to destroy profession or destroy. It's going to change it. But I don't think that's any different than any other. A great kind of analogy I make is when we started to have testing automation. It didn't do away with testers. This is just another tool that's going to be in our tool belt. Lance Dacy (08:51) Guy net. Brian (09:05) And I think our challenge is not to, you know, we're agilist, not to resist change, but to try to adapt, try to find ways that we can align and incorporate and get the most out of it. So, yeah. Lance Dacy (09:17) I think the most part of that though is, Brian, too, what most people fear. And I agree with you, we won't make it an AI show. just, we got a couple of points to make on this. But for the first time ever in human history, we now have something that might be more intelligent than us. And that is scary because there's some AI neural network engines that people can't explain how it's working anymore. They put it in place. And then it's like, we're not quite sure how it's doing all of this. And that's a scary thing, obviously, that can get out of control. We've never really had to face that. So we do have to be aware of that, but you know, let's go back and peel it back. Hey, we're, trying to plan a backlog with AI and we're trying to write a few Ruby on Rails code. I'm not letting it run my life yet. And one day it may already be doing that. I just don't even know it. I don't know. We won't get into that debate, but I think the thing is that we need to take pause of in the agile industry. is we embrace new technology as long as it's helping us deliver faster to our customers and save us time and efficiency. You know, I tell teams all the time, Agile is about delivering the highest business value items as early as possible with the least amount of cost friction, know, whatever word you want to use for that. Well, AI might help us do that, but I want to caution that. I think you and I were just talking about this. I wanted you to bring up that news story element that we were talking about. where people are just pushing content out there and kind of desensitizing us to is that important information or not? And I think AI needs to tag onto that. So I didn't know if you could share that real quick and then I want to share some metrics that I've seen some teams capture. There's a lot of teams now adopting these things called Dora metrics, which was created by a DevOps engineering group. And it's amazing to me now that we have real data to see, well, we have embraced AI. Brian (10:45) Sure. 
Lance Dacy (10:59) does do some things or not, I'd like to balance the good with the bad on that. But can you go over that new stuff that you were sharing with me? Brian (11:05) Yeah, no, it's just a conversation I've been having recently with people, they're friends of mine and kind of, you're probably feeling the same way about this in certain places, but the breaking news alerts that you get on your phone, you get those things all the time and I've had friends and I have discussions about maybe it's time to just turn them off. There's just so many breaking news alerts and that's kind of the issue, right? Is that there are so many that are now classified as Lance Dacy (11:23) Yeah. Brian (11:31) breaking news that you kind of look at that and say, this isn't really breaking news. You know, like if something really major happens, yeah, I want to know about that. I'd like to get an alert about something that's truly breaking news. the, you know, have major news sources, apps on my phone and get those breaking news alerts all the time. And some of them are just things that are minor, minor news that I would be much better served seeing in a summary and like a daily summary or even a weekly summary on some of the things. Right. Lance Dacy (11:50) Yeah. Or if at all, like you don't care about the sub undersecretary of Parks and Lighting in Minnetoca. You know, I don't know. It's just like, thank you for that information. But I totally agree that I feel like we're getting desensitized to a lot of these words, buzzwords, if you will. And we as humans are going to have to learn in this environment. And I'm trying to teach this with my kids as well, because they're the ones suffering the most from it. Brian (12:04) Right. Yeah. Lance Dacy (12:22) It's just inane information out there and you're filling your brains with the main things. So AI is great because it's allowing people to deliver more content, but is that content of substance or they just trying to market to you and get you, I forget the word you use for it, but, you know, keep you on a leash. Is that what you said? A small. Brian (12:42) Yeah, yeah. Yeah, that's, yeah, that's kind of what we were saying about this is that I think that the kind of conclusion that led me to is that I and I've seen this trend, I think in other areas as well, as I sort of feel like maybe with bigger companies, more than others in today's world, there seems to be a shift a little bit that, you know, for example, that that breaking news thing, it's not it's not something that benefits the customer, right? As the customer, I don't think there's a customer out there that says, I really love all these minor news stories appearing in my breaking newsfeed. But what it benefits is the company. It benefits the source because it keeps you engaged. It keeps you coming back and it keeps that ping to keep you engaged. And that's what they're trying to promote. That's good for the... Yeah, that's good for the company, but it's not good for the customer. I think that there may be, we may see some real kind of shifts I think happen in... Lance Dacy (13:21) Or me, it keeps me frustrated and I leave them. Brian (13:34) Some of those big companies maybe have moved too far in that way to favor their company's interest over the customer. And that leaves a door of opportunity, I think, for smaller companies to say, well, we're going to be all in on just what's best for the customer. And I think customers will appreciate that and will reward that because it's annoying otherwise. 
Lance Dacy (13:54) That's what I want to focus on because the last part of this AI conversation I want to have is I like a lot of what Gary Hamill, he's a management professor at a lot of different schools recently. He visits a lot of companies as well, but I really like the way he delivers his content and how he's more innovative and thought. I mean, I tell people all the time that management and leadership has not seen any innovation in 150 years. It's about time. that we start learning how to create cultures for human beings that are bringing gifts and talents every day to make things better for our customers. And Gary Hamill is a really good source if you're interested in those kinds of things. And so he emphasizes how AI has reshaped value creation by eliminating those low-level tasks that I think we all can embrace and are allowing agile teams to achieve unprecedented efficiency. Now... We are babies immature with this technology. So maybe these news organizations and the ones that we're going to kind of say, you're not doing a good job at it. It's not because they're bad. It's just we're learning how to use a new tool and hopefully customer feedback will change that. But I wanted to hit on these Dora metrics. Dora metrics are, I think they were created by DevOps research and assessment. That's what they kind of stand for. And there's four major categories. that Dora metrics measure as it relates to more of an engineering benchmark. Like how well are we, if you're an agile software development product company, Dora metrics are really good for you to look at. know, metrics can be misused, so be careful, but they're measuring outcomes. You know, what is our deployment frequency, which could be an output metric, because who knows if you're releasing the right things, but let's not get into that conversation. deployment frequency, lead time for changes, the change failure rate of your changes, and the meantime to recovery of those changes. I think those are really four good performance benchmarks. And they're starting to surface a lot in organizations that I work with. So you kind of use tools like Jellyfish or something to overlay over Jira. And all these tools are great, but these teams are using AI. And I found that we finally get some real data that says, how well is AI affecting those core metrics if you were measuring performance benchmarks of the software that you're delivering. And so this report that was created by the 2024 Accelerate State of DevOps report, they categorize organizations and performance clusters like elite, high, medium, and low. And based on their performance across these metrics that I just mentioned earlier, they're evaluating and guiding their software delivery practices. And so the impact of AI adoption was really cool to see on the DevOps Launchpad was a site that I saw this on, that the integration of AI into the development processes, as we were just talking about, has mixed effects on those door metrics. Can you believe that? So a 25 % increase in AI adoption correlated with a one and a half percent decrease in team throughput and a 72 % decrease in the stability of the product. Now these suggest that while AI, you know, offers productivity benefits maybe for the individuals or the teams, it has a, you know, it's introducing complexities that are affecting the software delivery performance. So I want our audience to pay attention to that. Brian (16:59) Wow. Wow. 
Lance Dacy (17:21) and start using some of these maybe to push back on managers and leaders that are just embracing this new tool and say, let's just push this on the teams. So that's the impact of AI adoption. And then if you look at platform engineering, so if you look at the implementation of an internal developer platforms, you know, that are helping developers deploy code faster, the adoption of AI led to an 8 % increase in individual productivity. and a 10 % increase at the team level. Now that's fantastic. But these gains were accompanied by an 8 % decrease in change throughput. So while the teams may be able to make changes, what I interpret that to mean is the customer is not seeing the changes. There's an 8 % decrease in the throughput all the way as a cycle time, if you will, all the way to the customer and a 14 % decrease in the stability of the product. So that indicates trade-offs. that we all need to be aware of that AI might be helping us performance wise, but it's not helping the customer a whole lot if we're destabilizing the platform. So I haven't dug into those metrics a lot, but I wanted to share that with the audience because if you do find yourself in a position where people are pushing this, you can try to go reference those and maybe give them some, I always call it pros and cons, right? There's no really right or wrong when you're an agile team trying to make a decision. You got to look at the pros and the cons and Brian (18:23) Yeah. Lance Dacy (18:40) We might accept a pro, multiple pros that come with some cons, but we all look at each other and say, that's the better decision for our customer. And we live with those cons, whatever they may be. So I wanted to talk about that because it centers on what you were just thinking with the news organization. just push, we got more productive at pushing content, but was it the right content or is it destabilizing what people are using? And you just have to be careful of that. Brian (18:57) Yeah. Yeah, no, I think those are excellent points. I think that's one of the things I see kind of for 2025 as well is that we're still so much in the empathy of how AI really plays into how a team operates and how development works that I don't think we can really say ultimately what's the right way or wrong way to do anything yet. I think it's good for teams to experiment. I don't think you should be afraid of experimenting and trying things. But it all comes back to the basic principle we say over and over as Agilist, inspect and adapt on it. Try something and identify what works about it and what doesn't work. And if that means that, we're using it too much and it's causing too much errors, we'll back off, find the right point, and move forward with that. Lance Dacy (19:41) Yeah. Or where companies are using it bad. Like I have a story that we won't get into here where a CEO or an executive of the company was mandating that they use AI to do something not so good for the customers. And you want to be able to push on that as well. So I'm sorry to interrupt you on that, but I was just like, man, that's something. Brian (20:07) Right. No. Lance Dacy (20:11) Sometimes, like we want to self-organize around the experimentation. We don't want it pushed in like management saying, need to use this because I want you more productive and managers be careful of doing that. Make sure you understand the pros and cons as much as you can before you dictate. Brian (20:26) Yeah. Something else you kind of said triggered something to me. 
I know the, I think that, well, not in a bad way, but it just, you know, the metrics I think that you mentioned were really good metrics. I liked the idea of kind of measuring, you know, things like, you know, the failure, the bug rate, you know, like how many defects and those kinds of things I think are good metrics. But they kind of, Lance Dacy (20:31) What? Okay. Brian (20:49) point out a certain difference that I think that's out there that I think the business community is wrestling with. And I hear these questions all the times in class, so I know it's prevalent out there. But we talk about building high performing teams. And just the difference between that word performing and productivity. There's sometimes I think confusion or false equivalency. between those two, that performance equals productivity. And I think a lot of the metrics sometimes we see that get measured or that we try to measure even, kind of expose that, as that's what's really the issue here, is that we're really trying to make that false equivalency between the two. It's not saying that performance has nothing to do with it, but Lance Dacy (21:15) Right. Brian (21:32) You know, this is the simplicity, the art of maximizing the amount of work not done is essential. You know, I'd rather have low productivity, but what we produce is high performing, is highly valuable, is something that matters, right? And I think that's kind of those kinds of statistics like you were bringing up, you know, what is our failure rate of things we put out there? Lance Dacy (21:44) Yeah. Brian (21:54) That is, I think, a performance metric to say, the old phrase, slow down to go faster. Right, right. Maybe the reason that our failure rate goes up and we're having problems with this is that we're trying to go too fast. And if we could back off, it ultimately makes you go faster if you have less bugs that you then have to go back and fix. Lance Dacy (22:00) Yeah, make hate, totally. Yeah. Brian (22:19) So it may be counterintuitive to certain organizations. Let's push them. Let's try to get everyone to go faster. But I think these new kind of metrics that you mentioned that we're trying to measure more and more, I think are starting to open people's eyes a little bit to the difference between those two words. Lance Dacy (22:22) I mean Well, in like the CrowdStrike situation, you know, that took down a lot of the airline systems, you know, I'm not saying they make, they didn't do a good job deploying and everything. All of us are victim of that kind of thing. But, know, to get us back on track a little bit, because you asked me the question, then I felt like I got us off on a tangent. know, 2024, obviously the rise of AI integration into Brian (22:48) Sure. Lance Dacy (22:54) the workflows that we experienced with Agile. And I just wanted to highlight, yeah, those are some great things, experiment with it. We're in our infancy. So there are a lot of things to discover that may not be so good. So start trying to put metrics in place. And I thought the Dora metrics, you know, as I've started discovering those, I'm a data guy and I'm like, yeah, as long as those are being tracked correctly, I think that's a good benchmark to kind of look at, hey, we're making a lot of changes in our software, but it's crashing the system. So change is good, crashing is bad. there's pros and cons, so we have to delegate that or figure that out. 
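For teams that want to see what tracking those four DORA measurements can look like in practice, here is a minimal Ruby sketch. The Deployment and Incident record shapes are assumptions made purely for illustration; real numbers would come from your deployment pipeline and incident tracker, not from hand-built structs.

```ruby
# Minimal sketch of the four DORA metrics. The record shapes below are
# illustrative assumptions, not the schema of any particular tool.
# Assumes non-empty deployment and incident collections.
Deployment = Struct.new(:deployed_at, :commit_authored_at, :caused_failure, keyword_init: true)
Incident   = Struct.new(:started_at, :resolved_at, keyword_init: true)

def dora_metrics(deployments, incidents, days:)
  lead_times_hours = deployments.map { |d| (d.deployed_at - d.commit_authored_at) / 3600.0 }
  recovery_hours   = incidents.map { |i| (i.resolved_at - i.started_at) / 3600.0 }

  {
    deployment_frequency:  deployments.size.to_f / days,                    # deploys per day
    lead_time_for_changes: lead_times_hours.sum / lead_times_hours.size,    # avg hours, commit to production
    change_failure_rate:   deployments.count(&:caused_failure).to_f / deployments.size,
    mean_time_to_recovery: recovery_hours.sum / recovery_hours.size         # avg hours to restore service
  }
end
```

Numbers like these are only as meaningful as the events feeding them, which is exactly the caution about misused metrics raised in the conversation above.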
Now, the other one that you just mentioned: I thought I saw a great shift in 2024 from output-related metrics to outcome-oriented metrics. The Scrum Alliance has a report, which we're all probably familiar with, especially you and I being certified Scrum trainers, and we get a lot of data from them. But teams moved away from feature counts to measuring outcomes like Brian (23:35) Yeah. Yeah. Lance Dacy (23:49) customer satisfaction and user retention. You know, we teach this in our advanced certified Scrum Master workshops, the difference between output versus outcome metrics, and we've been doing that for five years. And I think it's really starting to take hold that management and leadership, and maybe even teams, are measuring the wrong thing. I really saw the needle move in 2024; people's eyes are opening to the idea that we should measure the outcomes of what we're doing. Sometimes that sacrifices individual productivity and performance for a greater outcome achieved at the organization or customer level. We've been trying to articulate that for many years, and so I've seen a shift in that. And then also the rise of Agile beyond what I would generalize as IT. The Agile Alliance produced some information that I thought was interesting: Agile has expanded into sectors like healthcare, education, and human resources (HR), and those are typically what we would see as the laggards. You know, back in the day, banking and healthcare and all those were the last to adopt this progressive planning approach, because of the way that they budget and finance, and rightfully so. But those agile principles have been proven out far beyond software and unpredictable-type work, and are moving into, you know, different types of work environments, and tied onto that is how it's getting involved more in leadership. So I don't know about you, but I've also seen people focusing more on building a culture of what I would like to call leadership agility. John Maxwell, you know, is a vocal person in the industry about leadership, and he underscored this idea of agile leadership driving transformation across non-technical domains. So not just digital transformation; in non-technical domains this idea of empowering cross-functional teams is really taking hold. You know, we've been saying this in technology for years, that the siloed development method is not good. Well, organizations are starting to see that not only in the tech sector: why don't we put a marketing cross-functional team together with this other team? And that's what they talked about in '86, you know, in "The New New Product Development Game." And I think I started to see the needle move a little bit more, with leaders being more fascinated by leadership agility and driving culture change to meet the demands of cross-functional teams. And it could just be a by-product of technology making it easier to focus on these things now, but psychological safety, you know, and sustainability in agile, with people having real goals and integrating. Brian (25:59) You Lance Dacy (26:23) What you see now is a lot of these eco-conscious practices coming into product development; environmental, social, and governance (ESG) commitments are making their way in there as well. So I want to just reflect on 2024. I don't know what you think. I'd love to interact with the audience too, but those are kind of the main things that I saw. And that will lead us into a good discussion of how we see that helping us in 2025.
So what do you think about those? Brian (26:49) One of the things I think that kind of stood out to me from what you talked about was the concept of how that plays into leadership. And I think you're absolutely right. I am hearing more of that in classes, people talking about that when they ask questions. You know, we've talked about for years the fact that there can be sort of, I don't know a better word to say, but a glass ceiling sometimes in the organization for agile and how it spreads across, and that leaders are often, you know, overlooked as far as getting trained in this kind of stuff and understanding it. And I do see a rise in leaders trying to understand a little bit more about how can we, you know, incorporate this, or even better, you know, how do we support and nurture and foster this culture in our organization? So I think you're absolutely right. I think that is sort of a hidden or kind of a cheat code, if you will, for organizations trying to be more successful with the stuff we talk about. It's not a top-down approach, but if you don't have the top on board, then they can really start to become a hindrance or a roadblock to the teams actually being successful with it. And so I agree. I think that, you know, I'm hopeful that that shift is occurring. I'm seeing signs of that; you know, it's kind of always a little bit of a back and forth. Is it moving in that direction? Then I start to hear people say, no, we're having trouble. And the anecdotal little stories you hear make you kind of not sure what the prevalence is, you know? Lance Dacy (27:54) Yeah. Lose hope. You lose hope. I think, you know, the big takeaway for me for this as we talk about 2025 is it's going to be increasingly difficult, and it has been increasingly difficult, for any one individual company, product, service, whatever you want to call it, to differentiate yourself from other people. I've been telling my kids this forever. Brian (28:18) Right, right, exactly. Lance Dacy (28:38) that I feel I've seen a big shift from when I was back in the early 90s, you know, writing spreadsheets for people; they thought it was just unbelievable the work that I was doing because not everybody could do that. Well, everybody can do that now. So what I mean about differentiating yourself is, you know, AI is one of those things where you have to start prioritizing AI literacy, because we've just talked about how immature we might be in some cases with this. But if we can ensure that our team members understand how to work effectively with those AI-powered tools and let AI be an active team participant, then I think we're going to start seeing an even greater problem with being able to differentiate yourself. So the main point I want to make for 2025, which I believe is going to be a real big focus, is hyper-personalization of customer products. There are a lot of companies out there that are really good at this. You just mentioned it with the news, right? Hey, I'm building your content, I'm keeping you engaged, but am I really serving you? Am I giving you what you need? And maybe it's okay if news organizations do that, if you have a way to filter it and customize it. But really what I'm talking about is, and I'll go back to what Gary Hamel says about this: he says the markets are crowded. And when you have the rise of AI and tools like Trello, Monday, and things like that, those are project management tools, right?
Used to, you could be a better product company just if you would manage your work better. You know, you were using Scrum or Agile, you had an edge on everybody else. You could deploy faster and that was your secret sauce, right? But now that most people can do that now, what's your next up level in game? And he thinks it's going to be this hyper personalized customer solution and engagement. Brian (30:06) Right. Lance Dacy (30:23) where we need to invest in more customer discovery processes. You know how hard that is in teaching tech teams to do that? All we focus on is building the features, but how about we get better at customer discovery and really understand the tools that provide deep insights into their behavior so we can recognize that? know, several companies that I think are on the forefront of that, for those of you who are like, yeah, I'm concerned about that too. Where can we get better at that? I mean, go look at Amazon. Brian (30:30) Yeah. Lance Dacy (30:51) You know, Amazon uses highly sophisticated algorithms to analyze customer behavior, which enables them to produce product recommendations and help you buy things you didn't even know. You remember when we would teach like Kano analysis in a product owner class and they had six categories of features and one of those feature categories was an exciter or delighter feature. You know, the key to being a good differentiator is providing product and features that people didn't even know they needed. That's why customers are not always right, you know, on what they need. They're thinking about their reactive sense. And so how can we get better at predicting their behavior even more than they can and use AI and machine learning that allow for real-time adjustments? Because that used to take forever. You you think about Benjamin Graham's book on investing in the 1940s and 50s, trying to predict what the stock market is going to do is nearly impossible now. But can you imagine how he differentiated himself by doing all these algorithms by hand? Brian (31:20) Yeah. Lance Dacy (31:48) And so what I mean by that is we need to use AI and these tools to help do more predictive customer experiences. So Amazon does a good job. Netflix employs a lot of data analytics to help understand viewing habits. Starbucks does this. Spotify does it. So I really feel like in 2025, if you want something to focus on and you're a software product development company practicing agile, build literacy of AI tools with your team. Make sure we're using them the right way. Track the right. data, but more importantly, let's discover what our customers are doing and behaving and use the AI to help us decipher that information a lot easier so that we as humans can make a decision on where we spend the great scarce capacity of our teams building great products for them. And so there's a lot of things that go into that, but I feel like that's going to be the focus in 2025. That's what's going to separate the people that succeed even individually. How are you going to differentiate yourself from a market pool of people out there? You need to start learning how to use these tools and differentiate yourself. That's the for 2025. Brian (32:52) Yeah. No, that's a great point. I'll tag on and say that I know there's this, people probably have heard of this, there's a social media kind of trend of if you use chat GPT or something like that a lot to go to it and say, tell me some insights about myself that I may not know, just based on all my interactions with you. 
And that was a trend for a while for people to ask that and then. they were shocked in some of the things that would come out from chat GPT. Well, what I found in taking a couple of courses and things about AI is, it's really good at taking a large amount of data and then pulling out things that you may not be aware of. I think that's going to be something, the more data driven we are, obviously the better because we have facts behind it. And as you said, it has to be the right, we have to collect the right kind of data. you can take a big... Lance Dacy (33:19) Yep. Yes. Brian (33:43) source of data and feed it into an AI like ChatGPT and say, give me five hidden insights from this data. Yeah. Lance Dacy (33:50) Yeah, stuff you thought about, right? I think insights, that's the way to put it. And I used to have a saying being a data analytics guy for 20 years. Most people and organizations are data rich, but information poor. And I would like to change that word nowadays to insights poor because Brian (34:09) Yeah. Lance Dacy (34:09) We may have all the data and tracking data, there's no harm in that, know, storage is cheap these days. So go ahead and track it all. You can report on it infinite number of ways. And that's the secret sauce. And I think you just hit it on the head that, just go ahead and start tracking stuff. Let AI, you can't ever read that amount of data as a human being and decipher it. Let the machine do that. But then you can test it. You can say, do I really believe that or not? Because you have a humanistic experience that AI doesn't have. So we should embrace that. Brian (34:40) Yeah, I agree. Well, I mean, I hope people are hopeful. I'm hopeful. I know when I start a new year, I generally am hopeful because that's just the way I try to start new years. But I'm hopeful for some of these changes. think the tools that we have are just making things, some things that might have been more mundane, a little easier for us to do. And maybe that allows us to focus. Well, like the data I brought about at the very beginning, you the fact that there's a rise in, you know, postings and companies needing jobs that require human judgment and decision-making. I think that's where we're headed is, you know, that rise in human judgment and decision-making skill. And that's something that's at least at the moment, you know, our computers can't do for us. And it really does require, just like you talked about, understanding our customers. I can't put an AI out there to try to interview all my customers and get deep. Well, but not and get the kind of deep insights I want, right? Not to find out what the real problems are. It wouldn't know how to question it enough and dig deeper into different ways to truly figure those out. So it requires huge human judgment and decision-making. And I think that's where we... Lance Dacy (35:35) you could. Right. Brian (35:51) now bring the value is in that area. Lance Dacy (35:53) Well, and people hate change, right? So let's just end with this. know, most people, customers, you change things on the product. You put a new car design. We usually don't like it. So you want to hang in there and not get too distracted by noise with that. mean, remember when the first iPhone came out, you know, older generations like this is too complicated. I don't want to use it. And there is something to say for that. But eventually that's what we use and we learn how to adapt to it. So stay hyper competitive in 2025. Foster continuous learning for your team. 
So stay updated on industry trends. It'll lead time to experiment and invest in your team's learning. Prioritize collaboration and innovation. None of us are smarter than all of us together. Break down the silos. Encourage the cross-functional collaboration. And experimentation is going to be key. Leaders and managers in particular. must foster an environment where it's safe to not do so well. I tried something, it didn't work, and I'm sorry about that, but I learned from it and I'm going to try it this way next time. That's not a huge thing right now. We need to foster that. The last one, focus on delivering value. Keep the customer at the center of everything. Use metrics to measure your real world impact, not just the outputs. And I think that's how we can summarize everything that we talked about. Those are the three things if we had to take away. continuous learning, collaboration and innovation, and focus on delivering value. Good luck in 2025, right, Brian? Brian (37:19) Yeah, absolutely. Absolutely. That's awesome. Well, I hope this has been beneficial to folks. And Lance, I appreciate you keeping our tradition and helping us look forward into the new year. obviously, a very happy new year to you and your family. And thank you for coming back and joining us. Lance Dacy (37:35) Yeah, likewise to you, Brian. Glad to do it. Hope to see you all soon. Thank you all.

Go To Market Grit
#224 CTO & Co-Owner 37signals, David Heinemeier Hansson: Perfect Flow

Go To Market Grit

Play Episode Listen Later Jan 6, 2025 94:11


Guest: David Heinemeier Hansson, CTO & co-owner of 37signals and creator of Ruby on Rails. 37signals CTO David Heinemeier Hansson has organized his life around his passions: writing, racing sports cars, and coding. “Why aren't we all doing that?” he wonders. “Why aren't we all trying to optimize our life in such a way that much of it is enjoyable?” Part of the problem, David argues, is that it's impossible to find a creative or productive flow inside of mainstream work culture. Open offices, managerial over-hiring, and sloppy scheduling prevent people from reaching a flow state. “40 hours a week is plenty for most people,” he says. “... So many people today are focused on just adding more and more hours. They're not thinking about how those hours are spent.”
Chapters: (01:19) - 24 Hours of Le Mans (06:48) - Amateurs in sports car racing (10:54) - Flow and meditation (15:25) - Mundane bulls**t (18:14) - Optimizing for flow (21:09) - Calendars and open offices (24:30) - Full-time managers (29:06) - Small companies (32:20) - Selfishness and work (40:21) - Taking other people's money (45:43) - Temptation (49:49) - Moderately rich (55:19) - “The day I became a millionaire” (58:56) - The hassle (01:03:58) - Achieving the dream (01:08:34) - Shopify and Tobias Lütke (01:14:50) - Trade-offs and downsides (01:18:43) - The impact of Ruby on Rails (01:22:02) - “I love being wrong” (01:25:37) - DEI and illegal drugs (01:29:49) - Not hiring (01:30:35) - What "grit" means to David
Mentioned in this episode: TikTok, Minecraft, Mario Kart, Formula One, NASCAR, Lewis Hamilton, the NBA, Tesla Model S, Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi, Steve McQueen, Jason Fried, Tetris, Bullshit Jobs: A Theory by David Graeber, Elon Musk and Twitter, the Dunbar number, Zappos, Google, Adam Smith, Stripe, Meta, Jeff Bezos, Basecamp, Zapier, 1Password, GitHub, SpaceX, private jets, Aesop, the Pagani Zonda, the Porsche Boxster, Lamborghini, Coco Chanel, LeBron James, Hey, Steve Jobs, Michael Arrington and TechCrunch, Y Combinator, Dr. Thomas Sowell, Punished by Rewards by Alfie Kohn, Grit by Angela Duckworth, and LEGO.
Links: Connect with David: Twitter, LinkedIn. Connect with Joubin: Twitter, LinkedIn. Email: grit@kleinerperkins.com. Learn more about Kleiner Perkins. This episode was edited by Eric Johnson from LightningPod.fm

Rails with Jason
242 - John DeSilva, CTO at Revela

Rails with Jason

Play Episode Listen Later Jan 3, 2025 69:24 Transcription Available


In this episode, we reflect on the shift from remote work to in-person connections and explore Detroit's transformation into a vibrant place to live and work. With guest John DeSilva, CTO of Revela, we discuss his company's growth from a basement startup to success with Ruby on Rails and the challenges of upgrading apps with Turbo. We also dive into database design, managing outdated data, and the surprising value of old-school technology in today's world.

Rails with Jason
241 - Freedom Dumlao, Sin City Ruby 2025 Speaker

Rails with Jason

Play Episode Listen Later Dec 31, 2024 63:04 Transcription Available


Freedom Dumlao discusses Flexcar's switch from Java to Ruby on Rails, covering the challenges, successes, and lessons learned from the transition. He shares insights on balancing coupling and decoupling in microservices, the strategic parallels between programming and problem-solving, and his experiences at Ruby conferences. The episode wraps up with community highlights, dining tips for Boston's Chinatown, and ways to connect with Freedom.

How About Tomorrow?
Talking with Typecraft

How About Tomorrow?

Play Episode Listen Later Dec 23, 2024 76:41


A real fear of heights, Typecraft's setup and content on YouTube, working in hard mode in the terminal, development with Ruby on Rails, developers selling coffee, plant-based vs. meat eating, and tracking holiday traditions and family KPIs. Links: Kevin Shen - We Design Video Studios

Code and the Coding Coders who Code it
Episode 44 - Adam McCrea

Code and the Coding Coders who Code it

Play Episode Listen Later Dec 17, 2024 35:42 Transcription Available


What if you could scale your SaaS platforms effortlessly across diverse hosting services? Join us as we welcome Adam McCrea, the brilliant mind behind JudoScale, who takes us through his fascinating evolution from being a Rails developer to creating a cutting-edge autoscaling solution. Adam opens up about the technical challenges he faced while adapting JudoScale for platforms like Render, Fly, and Railway, and how Heroku's unique architecture initially shaped his approach. His journey is one of innovation driven by necessity, as JudoScale originated from a need to optimize costs more efficiently than existing solutions.Our conversation doesn't shy away from complexity; in fact, it embraces it. Adam shares his experiences of grappling with AWS integration, navigating the intricate maze of ECS, EC2, Fargate, and IAM, all driven by customer demand. We explore the strategic shift from metered billing to flat-tiered pricing and the hurdles faced while setting up a staging environment on Render, ultimately reaffirming Heroku's smoother experience. This episode promises valuable insights into the strategic decisions and architectural reimaginations that keep JudoScale ahead of the game.Adding a creative flair, we delve into the entertaining world of infomercial production, as Adam recounts his experience crafting a humorous Billy Mays-inspired ad for JudoScale. With the aid of AI tools like ChatGPT and Descript, Adam turned a fun concept into an engaging reality. As we wrap up, Adam shares his excitement for RailsConf in Philadelphia and the significance of fostering connections through digital networking. Whether you're a tech enthusiast or a developer seeking innovative scaling solutions, this episode is brimming with insightful takeaways and creative inspiration.Send us some love.HoneybadgerHoneybadger is an application health monitoring tool built by developers for developers.HoneybadgerHoneybadger is an application health monitoring tool built by developers for developers.Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Support the showReady to start your own podcast?This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Maintainable
Dan Moore: Building Developer-Friendly Authentication Solutions

Maintainable

Play Episode Listen Later Dec 3, 2024 49:20


Topics Covered
Characteristics of Maintainable Software
Dan emphasizes the importance of internal consistency in codebases, automated tests, and proper documentation to preserve decision-making context.
[00:05:32] Internal consistency: Why it matters.
[00:08:09] Lessons from maintaining legacy codebases.
Working with Legacy Systems
Dan shares stories of upgrading ORM frameworks, introducing caching systems, and transitioning to bug tracking tools.
[00:09:52] Replacing custom ORM systems with Hibernate and Ehcache.
[00:13:10] Tackling high-risk components with automated testing.
Modern Authentication Challenges
As part of FusionAuth, Dan discusses building developer-friendly tools that balance local flexibility with SaaS convenience.
[00:21:05] FusionAuth's role in secure authentication.
[00:28:13] Testing authentication flows locally and in CI pipelines.
Navigating Constraints in Teams
Advice for managing technical debt, advocating for team priorities, and communicating with stakeholders during lean times.
[00:16:39] Communicating the impact of resource constraints.
[00:19:27] Tracing single requests to understand complex systems.
Industry Trends and AI's Role
From managed services to the impact of AI on coding languages, Dan reflects on how the industry continues to evolve.
[00:35:05] Managed services as accelerators for maintainability.
[00:41:25] The potential and limits of AI in software development.
Key Takeaways
Consistency and documentation in codebases reduce cognitive overhead for developers.
Understand how your software fits into the business to prioritize effectively.
AI might reshape the industry, but it won't replace the need for thoughtful problem-solving.
Opinionated frameworks like Ruby on Rails continue to offer exceptional developer ergonomics.
Resources Mentioned
FusionAuth Blog
Dan's Personal Blog
CIAM Weekly Newsletter
Dan's Book: Letters to a New Developer
Zen and the Art of Motorcycle Maintenance
The Asimov story mentioned
Try FusionAuth
Download FusionAuth: Get started with the self-hosted version today.
Free Trial of FusionAuth: Experience the FusionAuth cloud for free!
Connect with Dan
LinkedIn
BlueSky
Thanks to Our Sponsor!
Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error-tracking tool designed for Ruby, Elixir, Python, Node.js, Javascript, and other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Use the code maintainable to get a 10% discount for your first year. Check them out!
Subscribe to Maintainable on: Apple Podcasts, Spotify, or search "Maintainable" wherever you stream your podcasts.
Keep up to date with the Maintainable Podcast by joining the newsletter.

Code and the Coding Coders who Code it
Episode 43 - Stan Lo

Code and the Coding Coders who Code it

Play Episode Listen Later Dec 3, 2024 32:45 Transcription Available


What drives a seasoned developer from Taiwan to London, and how does one translate a passion for Ruby into groundbreaking projects? Hear from Stan Lo of Shopify's RubyDX team as he shares his captivating journey and his significant impact on the Ruby development landscape. From his essential work on the debug gem and IRB to his current efforts with the Sorbet type checker and Prism parser, Stan delves into the technical intricacies of using C++ for performance and memory management. Gain unique insights into the collaborative decision-making process at Shopify that guided his transition from the Ruby LSP to focusing on Sorbet's integration.We also tackle the hurdles of progressing Ruby's Sorbet parser to Prism and the challenges of maintaining comprehensive Ruby documentation. Discover the importance of community-driven contributions, and how small acts like fixing typos can have a profound impact on the Ruby ecosystem. Experience Stan's personal anecdotes, from climbing adventures to mastering calisthenics, and explore the innovative shift from VS Code to Cursor, amplifying his development experience through AI capabilities. As we gear up for future events like RailsConf and RubyKaigi, there's an air of excitement for community reunions and ongoing projects. Join us for a blend of technical discussion, personal stories, and a call to action for all Ruby enthusiasts.Send us some love.HoneybadgerHoneybadger is an application health monitoring tool built by developers for developers.HoneybadgerHoneybadger is an application health monitoring tool built by developers for developers.Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Support the showReady to start your own podcast?This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Hemispheric Views
125: I Became One With PHP!

Hemispheric Views

Play Episode Listen Later Nov 28, 2024 45:08


It's that time again! It's Neatvember! Adam is back to chat about all things OMG. You have probably heard about "/save" but what about "/spend"!? Martin was away so we snuck into Obsidian corner! (Don't tell him please) Using Apple Podcasts? All notes can always be found here (https://listen.hemisphericviews.com/125)! AIFF, IFF, and WAV 00:00:00 IFF (https://en.wikipedia.org/wiki/Interchange_File_Format)

The Bike Shed
447: How to (not) implement impersonation

The Bike Shed

Play Episode Listen Later Nov 19, 2024 37:39


For developers, impersonation can be a powerful tool, but with great power comes great responsibility. In today's episode, hosts Stephanie and Joël explore the complexities of implementing impersonation features in software development, giving you the ability to take over someone's account and act as the user. They delve into the pros and cons of impersonation, from how it can help with debugging and customer support to its prime drawbacks regarding security and auditing issues. Discover why the need for impersonation is often a sign of poor admin tooling, alternative solutions to true impersonation, and the scenarios where impersonation might be the most pragmatic approach. You'll also learn why they advocate for understanding the root problem and considering alternative solutions before implementing impersonation. Tune in today for a deep dive into impersonation and the best ways to use it (or not use it)!
 Key Points From This Episode: What's new in Stephanie's world: how Notion Calendar is helping her manage her schedule. Joël's quest to find a health plan: how he used a spreadsheet to compare his options. A client request to build an impersonation feature, and why Joël has mixed feelings about it. What an impersonation tool does: it allows you to take over someone's account. When it's useful to use implementation as a feature, like for debugging and support. Potential risks and responsibilities associated with impersonation. Why the need for impersonation often indicates poor admin tooling. Technical and security implications of impersonation. Solutions for logging the audit trail when you're doing impersonation. Differentiating between the logged-in user and the user you're rendering views for. Building an app that isn't as tightly coupled to the “current user.” Suggested alternatives to true impersonation. The value of cross-functional teams and collaborative problem-solving. Links Mentioned in Today's Episode: Mailtrap (https://l.rw.rw/the_bike_shed) Notion Calendar (https://www.notion.com/product/calendar) 'Implementing Impersonation' (https://jamie.ideasasylum.com/2018/09/29/implementing-impersonation) Sustainable Web Development with Ruby on Rails (https://sustainable-rails.com/) The Bike Shed (https://bikeshed.thoughtbot.com/) Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) Joël Quenneville on X (https://x.com/joelquen) Support The Bike Shed (https://github.com/sponsors/thoughtbot) WorkOS (https://workos.com/)
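To make the episode's vocabulary concrete, here is a minimal, hedged sketch of what a Rails impersonation concern could look like, separating the signed-in admin from the user whose account is being viewed and leaving an audit trail. The session keys and the ImpersonationAudit model are hypothetical names invented for illustration; as the hosts argue, weigh the alternatives before building this at all.

```ruby
# app/controllers/concerns/impersonation.rb
# Sketch only: keeps the signed-in admin (true_user) distinct from the user
# whose views and permissions are rendered (current_user), and logs every switch.
# ImpersonationAudit and the session keys are hypothetical.
module Impersonation
  def impersonate(user)
    raise "not allowed" unless true_user.admin?

    session[:impersonated_user_id] = user.id
    ImpersonationAudit.create!(admin: true_user, target: user, started_at: Time.current)
  end

  def stop_impersonating
    session.delete(:impersonated_user_id)
  end

  def true_user
    @true_user ||= User.find(session[:user_id])
  end

  def current_user
    @current_user ||=
      if session[:impersonated_user_id]
        User.find(session[:impersonated_user_id])
      else
        true_user
      end
  end
end
```

Even at this size, the two recurring concerns from the episode are visible: an authorization gate before the takeover, and a durable record of who acted as whom.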

From Start-Up to Grown-Up
#80: David Heinemeier Hansson, Co-Owner of 37signals— Creating with first principles, acting with courage, and working in a world with no managers

From Start-Up to Grown-Up

Play Episode Listen Later Nov 19, 2024 80:28


David is the creator of Ruby on Rails, Co-Owner of 37signals, best-selling author, Le Mans class-winning racing driver, antitrust advocate, investor in Danish startups, frequent podcast guest, and family man. He writes regularly on HEY World and speaks on The REWORK Podcast. Hundreds of thousands of programmers around the world have built amazing applications using Ruby on Rails, an open-source web framework he created in 2003 and continues to develop to this day. Some of the more famous include GitHub, Shopify, Airbnb, Square, Coinbase, and Zendesk. For my newest episode of From Start-Up to Grown-Up, I talk with David Heinemeier Hansson, Co-Founder of 37signals, to explore his journey of innovation, remote work, and unconventional management. Learn more about DHH | Website: https://dhh.dk/ Connect with Alisa! Follow Alisa Cohn on Instagram: @alisacohn Twitter: @alisacohn Facebook: facebook.com/alisa.cohn LinkedIn: https://www.linkedin.com/in/alisacohn/ Website: http://www.alisacohn.com Download her 5 scripts for delicate conversations (and 1 to make your life better) Grab a copy of From Start-Up to Grown-Up by Alisa Cohn from Amazon. Love the show? Subscribe, Rate, Review, Like, and Share!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!
We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!
If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.
In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing some design choices about agent platform design that go against conventional wisdom in the space.
The Great Reset: From Text Fields to Rails
Remember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:
* Big prompt field ✅
* Bunch of tools ✅
* Prayer to the LLM gods ✅
Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."
Instead of a giant, intimidating text field, users now build workflows visually:
* Trigger (e.g., "Zendesk ticket received")
* Required actions (e.g., "Check knowledge base")
* Response generation
This isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."
The Surprising Truth About Model Limitations
Here's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck. Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."
Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:
* Raw capabilities aren't the constraint anymore
* Integration quality matters more than model performance
* User experience and workflow design are the new bottlenecks
The Search Engine Parallel: Why Horizontal Platforms Might Win
One of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00): "I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."
His argument: agent platforms might follow the same pattern because:
* Agents across verticals share more commonalities than differences
* There's value in having agents that can work together under one roof
* The R&D cost of getting agents right is better amortized across use cases
This might explain why we're seeing early vertical AI companies starting to expand horizontally.
The core agent capabilities - reliability, context management, tool integration - are universal needs.
What This Means for Builders
If you're building in the AI agents space, here are the key takeaways:
* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds
* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools
* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful
* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoption
The Meta Layer
There's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.
Full Video Podcast
Flo's talk at AI Engineer Summit
Chapters
* 00:00:00 Introductions
* 00:04:05 AI engineering and deterministic software
* 00:08:36 Lindys demo
* 00:13:21 Memory management in AI agents
* 00:18:48 Hierarchy and collaboration between Lindys
* 00:21:19 Vertical vs. horizontal AI tools
* 00:24:03 Community and user engagement strategies
* 00:26:16 Rickrolling incident with Lindy
* 00:28:12 Evals and quality control in AI systems
* 00:31:52 Model capabilities and their impact on Lindy
* 00:39:27 Competition and market positioning
* 00:42:40 Relationship between Factorio and business strategy
* 00:44:05 Remote work vs. in-person collaboration
* 00:49:03 Europe vs US Tech
* 00:58:59 Testing the Overton window and free speech
* 01:04:20 Balancing AI safety concerns with business innovation
Show Notes
* Lindy.ai
* Rick Rolling
* Flo on X
* TeamFlow
* Andrew Wilkinson
* Dust
* Poolside.ai
* SB1047
* Gathertown
* Sid Sijbrandij
* Matt Mullenweg
* Factorio
* Seeing Like a State
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.Flo [00:00:15]: Hey, yeah, thanks for having me.Swyx [00:00:17]: Also known as Altimore. I always wanted to ask, what is Altimore?Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.Swyx [00:00:26]: What was your classes?Flo [00:00:27]: I was an elf. I was a magician elf.Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of we are to LangChain as Airtable is to MySQL. Like you can just pin up AI agents super easily by clicking around and no code required. You don't have to be an engineer and you can automate business workflows that you simply could not automate before in a few minutes.Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents.
So basically, why are you blowing up?Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up. But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think I'll list you, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM peeled and chat GPT peeled, right? They got ahead of themselves in a way, and us included, and they thought that agents were actually, and LLMs were actually more advanced than they actually were. And so the first version of Lindy was like just a giant prompt and a bunch of tools. And then the realization we had was like, hey, actually, the more you can put your agent on Rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user, because you can really, as a user, you get, instead of just getting this big, giant, intimidating text field, and you type words in there, and you have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and really give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point. The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that. You check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. 
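As a rough editorial sketch of the difference Flo is describing, the same Zendesk flow written as ordinary code might look like this; KnowledgeBase, the ticket object, and llm_complete are hypothetical stand-ins, since Lindy itself is configured visually rather than in Ruby.

```ruby
# Hypothetical sketch of an "agent on rails": the knowledge-base lookup is a
# fixed step that always runs, and the model is only asked to draft the reply.
# KnowledgeBase, ticket, and llm_complete are illustrative stand-ins.
def handle_zendesk_ticket(ticket)
  articles = KnowledgeBase.search(ticket.subject, limit: 5)   # deterministic retrieval, never skipped

  llm_complete(
    system: "Answer the support ticket using only the provided articles.",
    user:   "Ticket:\n#{ticket.body}\n\nArticles:\n#{articles.map(&:summary).join("\n")}"
  )
end
```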
So you can really set up your agent any way you want like that.Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code of English, and then the smaller brains, the GPU pours, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional, if this, then that software.Flo [00:04:29]: I love that characterization, put the Shoggoth in the box. Yeah, we talk about using as much AI as necessary and as little as possible.Alessio [00:04:37]: And what was the choosing between kind of like this drag and drop, low code, whatever, super code-driven, maybe like the Lang chains, auto-GPT of the world, and maybe the flip side of it, which you don't really do, it's like just text to agent, it's like build the workflow for me. Like what have you learned actually putting this in front of users and figuring out how much do they actually want to add it versus like how much, you know, kind of like Ruby on Rails instead of Lindy on Rails, it's kind of like, you know, defaults over configuration.Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid-take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think for people like you and me, because we sort of have a mental model, okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit blackmailing towards humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me as a human, as AGI, I don't understand what they mean. I don't know what they mean. It is actually, I think whenever you can have a GUI, it is better than to have just a pure text interface.Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use? I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that balance?Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could require for Google. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. 
We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for I need to be able to write a draft for them. We just try to do it very incrementally like that.Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up OAuth that humans only want to kind of do once. So we try to jam-pack things all at once versus what if you could on-demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it? Yeah, for sure.Flo [00:07:37]: One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her to your emails when you're trying to find time with someone. And just like a human assistant, she's going to go back and forth and offer other abilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Suite, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth because the new account thing is kind of a clutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put your sugar in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy. Yeah, absolutely.Flo [00:08:51]: I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on a Holydew, which is like the French Airbnb, if you will. And so I received these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent or without AI in general is a pain in the butt because you must write an HTML parser for this email. And so it's just hard. You may not be able to do it and it's going to break the moment the email changes. By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part where the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation. 
You extract the number of travelers of the reservation. And now you can see when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append row node is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice. Yeah.Flo [00:10:41]: And you can see here for every node, you can configure which model you want to power the node. Here I use cloud. For this, I use GPT-4 Turbo. Much more complex example, my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So I receive, like my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right? So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is an actual coaching feedback that I received from Lindy. She was like, hey, this was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, why did you find too technical about my explanation? And Lindy restored the context. And so she basically picked up the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting in which I was. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not, right? So this was a teammate. He's an indie meeting recorder, posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy. And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replied. 
She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. It's a lot of like, there's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistance automations, like meeting scheduling and so forth.Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler, Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this. So you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions. Okay.Swyx [00:14:11]: I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands of thousands of things. That is a good question.Flo [00:14:22]: Agents still get confused if they have too many memories, to my point earlier about that. So I just am out of a call with a member of the Lama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we sent to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT 3.5, being Lindy agents. I suspect it's probably not as necessary in the Cloud 3.5 Sunette days, but I prune the memories. Yeah, okay.Swyx [00:15:05]: The reason is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word to only memorize this when I say memorize this? And otherwise, don't even bother.Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor, Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,Swyx [00:15:38]: there is a rule where I'm like,Flo [00:15:39]: aha, I can email my inbox processor, Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email I want you to archive, moving forward. 
Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receive thing for Lindy. And so then Lindy can process your physical mail.
Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use, like, Earth Class Mail. Yeah. So yeah, most likely, I can process my physical mail. Yeah.
Swyx [00:16:35]: And then the other product idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65-point Lindy, right?
Flo [00:16:43]: What's a 65-point?
Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.
Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right. So I'm just saying,
Swyx [00:16:56]: let people brag. Let people be super users. Oh, right.
Flo [00:16:59]: Give them a score. Give them a score.
Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?
Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable. Yeah.
Swyx [00:17:22]: And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?
Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to... That's just plain text.
Swyx [00:17:35]: That's right.
Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.
Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?
Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure readings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.
Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between. How do you think about what's a Lindy versus what's a sub-function of a Lindy?
Like, what's the hierarchy?Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code, like when do you start to create a new class versus when do you overload your current class. I think of it in terms of like jobs to be done and I think of it in terms of who is the Lindy serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to. Sometimes when a task is really more specialized, so for example, I have this like summarizer Lindy or this designer recruiter Lindy. These tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.Flo [00:19:36]: Lindys can work together, absolutely.Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?Flo [00:19:46]: So this one wakes me up every... So wakes herself up every week. And she sends me... So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just like read these summaries. Yeah.Alessio [00:20:09]: We should make a latent space Lindy. Marketplace.Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very open unity tools around meeting scheduling. So for example, one that is surprisingly hard is this find available times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about how long is the meeting? When does it start? When does it end? What are the meetings? The weekdays in which I meet? How many time slots do you return? What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guideline and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy start? When does a company start? Or maybe how do you think about the complexity of these tasks when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool. 
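To make the "find available times" action described above concrete, here is a simplified Python sketch; the parameter names and defaults are assumptions, and a real version would also handle time zones, recurring events, and the calendar provider's API.

```python
from datetime import datetime, timedelta, time


def find_available_times(
    busy: list[tuple[datetime, datetime]],                   # existing meetings (start, end)
    duration: timedelta = timedelta(minutes=30),             # how long the meeting is
    window_start: time = time(9, 0),                         # earliest start each day
    window_end: time = time(17, 0),                          # latest end each day
    weekdays: frozenset[int] = frozenset({0, 1, 2, 3, 4}),   # which weekdays I meet (Mon-Fri)
    buffer: timedelta = timedelta(minutes=15),               # padding between meetings
    max_slots: int = 5,                                      # how many time slots to return
    search_from: datetime | None = None,
    search_days: int = 14,
    step: timedelta = timedelta(minutes=30),
) -> list[datetime]:
    """Return up to max_slots candidate start times that satisfy the constraints."""
    now = search_from or datetime.now()
    slots: list[datetime] = []
    for day in range(search_days):
        date = (now + timedelta(days=day)).date()
        if date.weekday() not in weekdays:
            continue
        cursor = datetime.combine(date, window_start)
        day_end = datetime.combine(date, window_end)
        while cursor + duration <= day_end and len(slots) < max_slots:
            candidate_end = cursor + duration
            # A slot is free if it does not touch any meeting, padded by the buffer.
            conflict = any(
                cursor < b_end + buffer and candidate_end > b_start - buffer
                for b_start, b_end in busy
            )
            if not conflict and cursor > now:
                slots.append(cursor)
            cursor += step
        if len(slots) >= max_slots:
            break
    return slots
```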
I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. But I think that Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Like Google is a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of these different use cases. I have a thesis, which is, it's a really cool thesis for Lindy, is that the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform because that way your agents can work together. They're all like under one roof. That way you only learn one platform and so you can create agents for everything that you want. And you don't have to like pay for like a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.Flo [00:23:34]: Oh, no, I'm not saying it's either or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first because it's just easier to build. It's just much, much, much harder to build something horizontal. Cool.Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.Swyx [00:24:24]: I think that's the flywheel that you built the platform where creators see value in allying themselves to you. 
And so then, you know, your incentive is to make them successful so that they can make other people successful and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.
Flo [00:24:39]: Yeah, yeah. I mean, community is everything.
Swyx [00:24:41]: Are you doing anything special there? Any big wins?
Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.
Swyx [00:24:49]: I would say from having, so I have some involvement in the no-code community. I would say that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Latent Space is growing the community of AI engineers. And I think you have a slightly different audience of, I don't know what.
Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion to some extent.
Swyx [00:25:43]: Yeah. The framing can be different if you were, so I think tinkerers has this connotation of not serious or like small. And if you framed it to like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.
Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.
Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?
Flo [00:26:09]: A solopreneur, you know, small business owners, I think Andrew is an exception. Yeah. Yeah, yeah, he is.
Swyx [00:26:14]: He's an exception in many ways. Yep.
Alessio [00:26:16]: Just before we wrap on the use cases, is Rickrolling your customers, like, an officially supported use case, or maybe tell that story?
Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link and it was a Rickroll. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs. We did like a database query and we found, I think, like three or four other instances of it having happened before.
And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't Rickroll people, please don't Rickroll.
Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.
Alessio [00:27:31]: And it pretended to be so many things. That's the thing.
Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.
Flo [00:27:41]: That's the ID of a YouTube video.
Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long speech.
Flo [00:27:52]: It would have been so good.
Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up with an eval that says, make sure my AI does not Rickroll my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such like an open-ended problem space?
Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify a little bit the conversation after generating the eval. So it's very easy for us to spin up this kind of eval.
Alessio [00:28:36]: Do you use an off-the-shelf tool, like Braintrust, who was on the podcast? Or did you just build your own?
Flo [00:28:41]: We unfortunately built our own. We're most likely going to switch to Braintrust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was like, it was very, very, very early. I wouldn't recommend building your own eval tool. There's better solutions out there and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.
Swyx [00:29:04]: I was going to ask that basically because I think my first conversations with you about Lindy were that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You were kind of showing off to me like how many evals you were running, right?
Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.
Swyx [00:29:21]: What is one thing that Braintrust has nailed that you always struggled to do?
Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, like they're doing what we do with our eval tool, but better.
Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.
Flo [00:29:41]: Same here.
Swyx [00:29:42]: Yeah, like evals for Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on OpenAI to guarantee that the structured outputs are going to be good, right?
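A small sketch of the "one click, conversation becomes a unit test" idea described above. The EvalCase shape and the agent and judge callables are hypothetical stand-ins, not Lindy's or Braintrust's actual interfaces.

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    conversation: list[dict]   # [{"role": "...", "content": "..."}]
    expected_behavior: str     # e.g. "Reply with a real docs link; never invent URLs."
    positive: bool = True      # negative examples show what NOT to do


def eval_from_history(history: list[dict], note: str, positive: bool = True) -> EvalCase:
    """'One-click' eval: freeze a real conversation as a regression test."""
    return EvalCase(conversation=history, expected_behavior=note, positive=positive)


def run_eval(case: EvalCase, agent, judge) -> bool:
    """Replay the conversation through the agent and let an LLM judge
    decide whether the regenerated reply matches the expected behavior."""
    reply = agent(case.conversation[:-1])            # regenerate the last turn
    verdict = judge(reply, case.expected_behavior)   # hypothetical LLM-as-judge call
    return verdict if case.positive else not verdict
```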
We had Michelle sit where you sit and she explained exactly how they do constrained grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform because you just built the platform. You don't actually need to eval that much.
Flo [00:30:14]: Yeah. In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on rails.
Swyx [00:30:30]: Is there a limiting factor of Lindy as a business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?
Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause because I'm like, I want the model to be a limit. And so no, the integrations are one, the core capabilities are one. So for example, we are investing in a system that's basically, I call it, like... it's a hack, forgive me these names, like the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making, or embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today. It's really like good old engineering and product work.
Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.
Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4o? Was it 3.5? 3.5.
Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4o is overhyped, frankly. We don't use 4o. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then with prompt caching with 3.5 Sonnet, that cut the cost again. Just cut it in half. Yeah.
Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like, for caching to work, you need the front prefix portion to be stable.
Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every following node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.
Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.
Flo [00:32:50]: No, sorry. Anthropic manages that for us. But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.
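A rough sketch of the confirmation-and-correction loop ("poor man's RLHF") described above: ask for approval before executing, store the user's edits as embeddings, and retrieve them as examples for similar future tasks. The store, embed, and ask_user interfaces here are assumptions, not Lindy's internals.

```python
def confirm_step(draft: str, ask_user) -> tuple[str, bool]:
    """Human-in-the-loop toggle: show the drafted action and let the user
    approve, edit, or deny it before it executes."""
    decision, edited = ask_user(draft)  # e.g. ("approve", None) / ("edit", "...") / ("deny", None)
    if decision == "deny":
        return draft, False
    return (edited if decision == "edit" else draft), True


def record_correction(store, embed, task_context: str, before: str, after: str) -> None:
    """When the user edits a draft, embed the correction so it can be
    retrieved later for similar tasks."""
    store.add(vector=embed(task_context), payload={"before": before, "after": after})


def build_prompt(base_prompt: str, store, embed, task_context: str, k: int = 3) -> str:
    """Inject the k most similar past corrections into the context window."""
    # store.search is assumed to return the stored payload dicts, nearest first.
    examples = store.search(vector=embed(task_context), limit=k)
    shots = "\n\n".join(
        f"Draft: {e['before']}\nUser's corrected version: {e['after']}" for e in examples
    )
    return f"{base_prompt}\n\nPast corrections to imitate:\n{shots}"
```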
Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAI and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.
Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations. Very good.
Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.
Flo [00:33:31]: No comment.
Alessio [00:33:32]: I think the market is,
Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.
Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.
Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-people team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-people team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.
Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?
Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture. I used to be very cognitive architecture-pilled, and I was like, ah, and I had, like, a critic, and I had, like, a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 Sonnet is as good as O1, but with none of all the crazy...
Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.
Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.
Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.
Flo [00:35:23]: You can just wait for O1 to get better.
Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good.
It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right? Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Indy? Yeah.Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wantsFlo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today. Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrationsFlo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap. 
It's been clear to us that it was coming, My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slowerSwyx [00:39:07]: and laggierFlo [00:39:08]: and just not feeling as good as iOS devices. Look, when you're talking about modules and magnitude of differences in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in systems are actually grateful because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, this sort of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. That's not something like meeting recording, like Lindy Today right now joins your Zoom meetings and takes notes, all of that stuff.Swyx [00:40:34]: I don't see that so farFlo [00:40:35]: on the OpenAI roadmap.Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to You're hiring GMs?Flo [00:40:42]: We did.Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hireFlo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out. The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO? 
I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.Swyx [00:41:29]: I don't know,Flo [00:41:30]: products. Even that, eventually, you do have to hand out. Yes, the vision, the culture, and the foundation. Well, you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.Swyx [00:41:48]: Obviously, like you're recording and managing the team. Yeah.Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.Swyx [00:41:56]: Well, yeah. You saw myFlo [00:41:59]: design your recruiter here. Relationship between Factorio and building Lindy. We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, in the instance, it's like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levels do you have to make impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster. Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaksFlo [00:42:54]: and you don't want that. Yeah.Swyx [00:42:56]: That's true. You want an army of mindless dronesFlo [00:42:59]: that all work the same way.Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio Space just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.Flo [00:43:33]: You should know. I legit was just hearing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having older is important.Swyx [00:44:05]: I'm wrestling with that. There's two ways I encountered it recently. One with Lindy. 
When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up frontFlo [00:44:27]: stuffed everythingSwyx [00:44:28]: I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Clouds on it and build up something working and just add it over time and it just worked. And to me, that was the factorial sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense. Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linkedSwyx [00:46:31]: to its task.Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,Swyx [00:46:39]: came to me with pitchforksFlo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work, every other month. The discussion comes up again. What was that discussion like? How did your feelings change? 
Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the futureFlo [00:47:12]: as you scale the team? Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams. And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office if you're even going to have a home office all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. And eventually, I was like, okay, this is it. We're not going to do this anymore.Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Subban from GitLab.Flo [00:49:14]: Mm-hmm.Swyx [00:49:15]: Matt MullenwegFlo [00:49:16]: used to be a hero.Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company

Ruby on Rails Podcast
Episode 527: Evangelizing Rails with Irina Nazarova

Ruby on Rails Podcast

Play Episode Listen Later Nov 13, 2024 30:36


We all love Ruby on Rails. You may remember that my first episode on this show was about whether Rails was still relevant. In the last few years, there have been so many exciting things coming to Ruby and Rails. I have felt that excitement in the community. I've felt it at conferences and in my conversations with many of you. Today, Irina Nazarova joins us to talk about how we can better evangelize Rails.
Show Notes:
Neighbor gem: https://github.com/ankane/neighbor
Irina's Keynote: https://www.youtube.com/watch?v=-sFYiyFQMU8
Rails stack from Evil Martians (includes Turbo Mount and other things Irina mentioned): https://evilmartians.com/rails-startup-stack
Irina's Meetups Blog Post: https://evilmartians.com/chronicles/lets-have-more-tech-meetups-a-quick-start-guide-to-holding-your-own

The Changelog
Rails is having a moment (again) (Interview)

The Changelog

Play Episode Listen Later Oct 31, 2024 122:12


(Includes expletives) David Heinemeier Hansson (DHH), creator of Ruby on Rails and co-owner of 37signals, joined the show to discuss this Rails moment and renewed excitement for Rails. We discuss hard opinions, developers being cooked too long in the JavaScript soup, finding developer joy, the pros and cons of the BDFL, the ongoing WordPress drama with WP Engine, and what's to come in Rails 8.

Coder Radio
592: C++ Safety Dance

Coder Radio

Play Episode Listen Later Oct 23, 2024 45:24


C++'s Borg-like mission continues, and some thoughts on Rails 8.1. Plus, there is a little trouble in Microsoft Paradise. And why Chris finally paid for an LLM.

The Bike Shed
443: Rails World and Open Source with Stefanni Brasil

The Bike Shed

Play Episode Listen Later Oct 8, 2024 31:15


Learning from other developers is an important ingredient to your success. During this episode, Joël Quenneville is joined by Stefanni Brasil, Senior Developer at Thoughtbot, and core maintainer of faker-ruby. To open our conversation, she shares the details of her experience at the Rails World conference in Toronto and the projects she enjoyed seeing most. Next, we explore the challenge of Mac versus Windows and how these programs interact with Ruby on Rails and dive into Stefanni's involvement in Open Source for Thoughtbot and beyond; what she loves about it, and how she is working to educate others and expand the current limitations that people experience. This episode is also dedicated to the upcoming Open Source Summit that Stefanni is planning on 25 October 2024, what to expect, and how you can get involved. Thanks for listening!
Key Points From This Episode:
Introducing and catching up with Thoughtbot Senior Developer and maintainer of faker-ruby, Stefanni Brasil.
Her experience at the Rails World conference in Toronto and the projects she found most inspiring.
Why accessibility remains a key topic.
How Ruby on Rails translates on Mac and Windows.
Stefanni's involvement in Open Source and why she enjoys it.
Her experience as core maintainer at faker-ruby.
Ideas she is exploring around Jeremy Evans' book Polished Ruby Programming and the direction of Faker.
Involvement in Thoughtbot's Open Source and how it drew her in initially.
The coaching series on Open Source that she participated in earlier this year.
What motivated her to create a public Google doc on Open Source maintenance.
An upcoming event: the Open Source Summit.
The time commitment expected from attendees.
How Stefanni intends to interact with guests and the talk that she will give at the event.
Why everyone is welcome to engage at any level they are comfortable with.
Links Mentioned in Today's Episode:
Stefanni Brasil (https://www.stefannibrasil.me/)
Stefanni Brasil on X (https://x.com/stefannibrasil)
Thoughtbot Open Summit (https://thoughtbot.com/events/open-summit)
Open Source Issues doc (https://docs.google.com/document/d/1zok6snap6T6f4Z1H7mP9JomNczAvPEEqCEnIg42dkU4/edit#heading=h.rq72izdz9oh6)
Open Source at Thoughtbot (https://thoughtbot.com/open-source)
Polished Ruby Programming (https://www.packtpub.com/en-us/product/polished-ruby-programming-9781801072724)
Faker Gem (https://github.com/faker-ruby/faker)
Rails World (https://rubyonrails.org/world/)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)