SQL database engine software
✨ Listen to the Futucast episodes via Wolt: https://press.wolt.com/fi-FI/257431-wolt-vieraana-futucast-podcastissa/
The Overtired trio reunites for the first time in ages, diving into a whirlwind of health updates, hilarious anecdotes, and the latest tech obsessions. Christina shares a dramatic spinal saga while Brett and Jeff discuss everything from winning reddit contests to creating a universal markdown processor. Tune in for updates on Mark 3, the magical world of Scrivener, and why Brett’s back on Bing. Don’t miss the banter or the tech tips, and as always, get ready to laugh, learn, and maybe feel a little overtired yourself. Sponsor Shopify is the commerce platform behind 10% of all eCommerce in the US, from household names like Mattel and Gymshark, to brands just getting started. Get started today at shopify.com/overtired. Chapters 00:00 Welcome to the Overtired Podcast 01:09 Christina’s Health Journey 10:53 Brett’s Insurance Woes 15:38 Jeff’s Mental Health Update 24:07 Sponsor Spot: Shopify 24:18 Sponsor: Shopify 26:23 Jeff Tweedy 27:43 Jeff’s Concert Marathon 32:16 Christina Wins Big 36:58 Monitor Setup Challenges 37:13 Ergotron Mounts and Tall Poles 38:33 Review Plans and Honest Assessments 38:59 Current Display Setup 41:30 Thunderbolt KVM and Display Preferences 42:51 MacBook Pro and Studio Comparisons 50:58 Markdown Processor: Apex 01:07:58 Scrivener and Writing Tools 01:11:55 Helium Browser and Privacy Features 01:13:56 Bing Delisting Incident Show Links Danny Brown's 10 in the New York Times (gift link) Indigo Stack Scrivener Helium Bangs Apex Apex Syntax Join the Marked 3 Beta LG 32 Inch UltraFine™evo 6K Nano IPS Black Monitor with Thunderbolt™ 5 Join the Conversation Merch Come chat on Discord! Twitter/ovrtrd Instagram/ovrtrd Youtube Get the Newsletter Thanks! You’re downloading today’s show from CacheFly’s network BackBeat Media Podcast Network Check out more episodes at overtiredpod.com and subscribe on Apple Podcasts, Spotify, or your favorite podcast app. Find Brett as @ttscoff, Christina as @film_girl, Jeff as @jsguntzel, and follow Overtired at @ovrtrd on Twitter. Transcript Brett + 2 Welcome to the Overtired Podcast Jeff: [00:00:00] Hello everybody. This is the Overtired podcast. The three of us are all together for the first time since the Carter administration. Um, it is great to see you both here. I am Jeff Severance Gunzel if I didn’t say that already. Um, and I’m here with Christina Warren and I’m here with Brett Terpstra and hello to both of you. Brett: Hi. Jeff: Great to see you both. Brett: Yeah, it’s good to see you too. I feel like I was really deadpan in the pre-show. I’ll try to liven it up for you. I was a horrible audience. You were cracking jokes and I was just Jeff: that’s true. Christina, before you came on, man, I was hot. I was on fire and Brett was, all Brett was doing was chewing and dropping Popsicle parts. Brett: Yep. I ate, I ate part of a coconut outshine Popsicle off of a concrete floor, but Jeff: It is true, and I didn’t even see him check it [00:01:00] for cat hair, Brett: I did though. Jeff: but I believe he did because he’s a, he’s a very Brett: I just vacuumed in Jeff: He’s a very good American Brett: All right. Christina’s Health Journey Brett: Well, um, I, Christina has a lot of health stuff to share and I wanna save time for that. So let’s kick off the mental health corner. Um, let’s let Christina go first, because if it takes the whole show, it takes the whole show. Go for it. Christina: Uh, I, I will not take this hold show, but thank you. Yeah. So, um, my mental health is okay-ish. 
Um, I would say the okay-ish part is, is because of things that are happening with my physical health and then some of the medications that I’ve had to be on, um, uh, to deal with it. Uh, prednisone. Fucking sucks, man. Never nev n never take it if you can avoid it. Um, but why Christina, why are you on prednisone or why were you on prednisone for five days? Um, uh, and I’m not anymore to be clear, but that certainly did not help my mental health. Um, at the beginning of November, I woke up and I thought that I’d [00:02:00] slept on my shoulder wrong. And, um, uh, and, and just some, some background. I, I don’t know if this is pertinent to how my injury took place or not, but, but it, I’m sure that it didn’t help. Um, I have scoliosis and in the top and the bottom of my spine, so I have it at the top of my, like, neck area and my lower back. And so my back is like a crooked s um, this will be relevant in a, in a second, but, but I, I thought that I had slept on my back bunny, and I was like, okay, well, all right, it hurts a lot, but fine. Um, and then it, a, a couple of days passed and it didn’t get any better, and then like a week passed and I was at the point where I was like, I almost feel like I need to go to the. Emergency room, I’m in pain. That is that significant. Um, and, you know, didn’t get any better. So I took some of grant’s, Gabapentin, and I took, um, some, some, uh, a few other things and I was able to get in with like a, a, a sports and spine guy. Um, and um, [00:03:00] he looked at me and he was like, yeah, I think that you have like a, a, a bolting disc, also known as a herniated disc. Go to physical therapy. See me later. We’ll, we’ll deal with it. Um. Basically like my whole left side was, was, was really sore and, and I had a lot of pain and then I had numbness in my, my fingers and um, and, and that was a problem the next day, which was actually my birthday. The numbness had at this point spread to my right side and also my lower extremities. And so at this point I called the doctor and he was like, yeah, you should go to the er. And so I went to the ER and, and they weren’t able to do anything for me other than give me, you know, like, um, you know, I was hoping they might give me like, some sort of steroid injection or something. They wouldn’t do anything other than, um, basically, um, they gave me like another type of maybe, maybe pain pill or whatever. Um, but that allowed the doctor to go ahead and. Write, uh, write up an MRI took forever for me to get an MRI, I actually had to get it in Atlanta. [00:04:00] Fun fact, uh, sometimes it is cheaper to just pay and not go through insurance and get an MR MRI and, um, a, um, uh, an x-ray, um, I was able to do it for $450 Jeff: Whoa. Really? Christina: Yeah, $400 for the MR mri. $50 for the x-ray. Jeff: Wow. Christina: Yeah. Yeah. Brett: how I, they, I had an MRI, they charged me like $1,200 and then they failed to bill insurance ’cause I was between insurance. Christina: Yes. Yeah. So what happened was, and and honestly that was gonna be the situation that I was in, not between insurance stuff, but they weren’t even gonna bill insurance. And insurance only approved certain facilities and to get into those facilities is almost impossible. Um, and so, no, there are a lot of like get an MR, I now get a, you know, mammogram, get ghetto, whatever places. And because America’s healthcare system is a HealthScape, you can bypass insurance and they will charge you way less than whatever they bill insurance for. 
So I, I don’t know if it’s part of the country, you know, like Seattle I think might [00:05:00] probably would’ve been more expensive. But yeah, I was able to find this place like a mile from like, not even a mile from where my parents lived, um, that did the x-rays and the MRI for $450 total. Brett: I, I hate, I hate that. That’s true, but Christina: Me too. Me too. No, no. It pisses me off. Honestly, it makes me angry because like, I’m glad that I was able to do that and get it, you know, uh, uh, expedited. Then I go into the spine, um, guy earlier this week and he looks at it and he’s like, yep, you’ve got a massive bulging disc on, on C seven, which is the, the part of your lower cervical or cervical spine, which is your neck. Um, and it’s where it connects to your ver bray. It’s like, you know, there are a few things you can do. You can do, you know, injections, you can do surgery. He is like, I’m gonna recommend you to a neurosurgeon. And I go to the neurosurgeon yesterday and he was showing me or not, uh, yeah, yesterday he was showing me the, the, the, the scans and, and showing like you up close and it’s, yeah, it’s pretty massive. Like where, where, where the disc is like it is. You could see it just from one view, like, just from like [00:06:00] looking at it like, kind of like outside, like you could actually like see like it was visible, but then when you zoomed in it’s like, oh shit, this, this thing is like massive and it’s pressing on these nerves that then go into my, my hands and other areas. But it’s pressing on both sides. It’s primarily on my left side, but it’s pressing on on my right side too, which is not good. So, um, he basically was like, okay. He was like, you know, this could go away. He was like, the pain isn’t really what I’m wanting to, to treat here. It’s, it’s the, the weakness because my, my left arm is incredibly weak. Like when they do like the, the test where like they, they push back on you to see like, okay, like how, how much can you, what, like, I am, I’m almost immediately like, I can’t hold anything back. Right? Like I’m, I’m, I’m like a toddler in terms of my strength. So, and, and then I’m freaked out because I don’t have a lot of feeling in my hands and, and that’s terrifying. Um, I’m also. Jeff: so terrifying, Christina: I’m, I’m also like in extreme pain because of, of, of where this sits. Like I can’t sleep well. Like [00:07:00] the whole thing sucks. Like the MRI, which was was like the most painful, like 25 minutes, like of my existence. ’cause I was laying flat on my back. I’m not allowed to move and I’m just like, I’m in just incredible pain with that part of, of, of, of my, my side. Like, it, it was. It was terrible. Um, but, uh, but he was like, yeah. Um, these are the sorts of surgical options we have. Um, he’s gonna, um, do basically what what he wants to do is basically do a thing where he would put in a, um, an artificial or, or synthetic disc. So they’re gonna remove the disc, put in a synthetic one. They’ll go in through the, the front of my throat to access the, my, my, my, my spine. Um, put that there and, um, you know, I’ll, I’ll be overnight in the hospital. Um, and then it’ll be a few weeks of recovery and the, the, the pain should go away immediately. Um, but it, it could be up to two years before I get full, you know, feeling back in my arm. So anyway, Jeff: years, Jesus. And Christina: I mean, and hopefully less than that, but, but it could be [00:08:00] up to that. Jeff: there’s no part of this at this point. 
That’s a mystery to you, right? Christina: The mystery is, I don’t know how this happened. Jeff: You don’t know how it happened, right? Of course. Yeah, of course. Yeah. Yeah. Brett: So tell, tell us about the ghastly surgery. The, the throat thing really threw me like, I can’t imagine that Christina: yeah, yeah. So, well, ’cause the thing is, is that usually if what they just do, like spinal fusion, they’ll go in at the back of your neck, um, and then they’ll remove the, the, um, the, the, the, the disc. And then they’ll fuse your, your, your two bones together. Basically. They’ll, they’ll, they’ll, they’ll fuse this part of the vertebrae, but because they’re going to be replacing the, the disc, they need more room. So that’s why they have to go in through the, through, through basically your throat so that they can have more room to work. Jeff: Good lord. No thank you. Brett: Ugh. Wow. Jeff: Okay. Brett: I am really sorry that is happening. That is, that is, that dwarfs my health concerns. That is just constant pain [00:09:00] and, and it would be really scary. Christina: Yeah. Yeah. It’s not great. It’s not great, but I’m, I’m, I’m doing what I can and, uh, like I have, you know, a small amount of, of Oxycodine and I have like a, a, a, you know, some other pain medication and I’m taking the gabapentin and like, that’s helpful. The bad part is like your body, like every 12, 15 hours, like whatever, like the, the, the cycle is like, you feel it leave your system and like if you’re asleep, you wake up, right? Like, it’s one of those things, like, you immediately feel it, like when it leaves your system. And I’ve never had to do anything for pain management before. And they have me on a very, they have me like on the smallest amount of like, oxycodone you can be on. Um, and I’m using it sparingly because I don’t wanna, you know, be reliant on, on it or whatever. But it, it, but it is one of those things where I’m like, yeah, like sometimes you need fucking opiates because, you know, the pain is like so constant. And the thing is like, what sucks is that it’s not always the same type of pain. Like sometimes it’s throbbing, sometimes it’s sharp, sometimes it’s like whatever. It sucks. But the hardest thing [00:10:00] is like, and. This does impact my mental health. Like it’s hard to sleep. Like, and I’m a side sleeper. I’m a side sleeper, and I’m gonna have to become a back sleeper. So, you know. Yeah. It’s just, it’s, it’s not great. It’s not great, but, you know, that, that, that, that, that’s me. The, the good news is, and I’m very, very gratified, like I have a good surgeon. Um, I’m gonna be able to get in to get this done relatively quickly. He had an appointment for next week. I don’t think that insurance would’ve even been able to approve things fast enough for, for, for that regard. And I have, um, commitments that I can’t make then. And I, and that would also mean that I wouldn’t be able to go visit my family for Christmas. So hopefully I’ll do it right after Christmas. I’m just gonna wait, you know, for, for insurance to, to do its thing, knock on wood, and then schedule, um, from there. But yeah, Jeff: Woof. Christina: so that’s me. Um, uh, who wants to go next? Jeff or, uh, Jeff or Brett? Jeff: It’s like, that’s me. Hot potato throwing it. Brett: I’ll, I’ll go. Brett’s Insurance Woes Brett: I can continue on the insurance topic. Um, I was, for a few months [00:11:00] after getting laid off, I was on Minsu, which is Minnesota’s Medicaid, um, v version of Medicaid. 
And so basically I paid nothing and I had better insurance than I usually have with, uh, you know, a full deductible and premiums and everything. And it was fantastic. I was getting all the care I needed for all of the health stuff I’m going through. Um, I, they, a, a new doctor I found, ordered the 15 tests and I passed out ’cause it was so much blood and. And it, I was getting, but I was getting all these tests run. I was getting results, we were discovering things. And then my unemployment checks, the income from unemployment went like $300 over the cap for Medicaid. So [00:12:00] all of a sudden, overnight I was cut from Medicaid and I had to do an early sign up, and now I’m on courts and it sucks bad. Like they’re not covering my meds. Last month cost me $600. I was also paying. In addition to that, a $300 premium plus every doctor’s visit is 50 bucks out of pocket. So this will hopefully only last until January, and then it’ll flip over and I will be able to demonstrate basically no income, um, until like Mark makes enough money that it gets reported. Um, and even, uh, until then, like I literally am making under the, the poverty limit. So, um, I hope to be back on Medicaid shortly. I have one more month. I’ll have to pay my $600 to refill. I [00:13:00] cashed out my 401k. Um, like things were, everything was up high enough that I had made, I. I had made tens of thousands of dollars just on the investments and the 401k, but I also have a lot of concerns about the market volatility around Nvidia and the AI bubble in general. Um, so taking my money out of the market just felt okay to me. I paid the 10%, uh, penalty Jeff: Mm-hmm. Brett: and ultimately I, I came out with enough cash that I can invest on my own and be able to cover the next six months. Uh, if I don’t have any other income, which I hope to, I hope to not spend my nest egg. Um, but I did, I did a lot of thinking and calculating and I think I made the right choices. But anyway, [00:14:00] that will help if I have to pay for medical stuff that will help. Um. And then I’ve had insomnia, bad on and off. Right now I’m coming off of two days of good sleep. You’re catching me on a good day. Um, but Jeff: Still wouldn’t laugh at my jokes. Brett: before that it was, well, that’s the thing is like before that, it was four nights where I slept two to four hours per night, and by the end of it, I could barely walk. And so two nights of sleep after a stint like that, like, I’m just super, I’m deadpan, I’m dazed. Um, I could lay down and fall asleep at any time. Um, I, so, so keep me awake. Um, but yeah, that’s, that’s, that’s me. Mental health is good. Like I’m in pretty high spirits considering all this, like financial stuff and everything. Like my mood has been pretty stable. I’ve been getting a lot of coding done. I’ll tell you about projects in [00:15:00] a minute, but, um, but that’s, that’s me. I’m done. Jeff: Awesome. I’m enjoying watching your cat roll around, but clearly cannot decide to lay down at this point. Brett: No, nobody is very persnickety. Jeff: I literally have to put my. Well, you say put a cat down like you used to. When you put a kid down for a nap, you say you wanna put ’em down. Right? That’s where it’s coming from. I now have a chair next to my desk, ’cause I have one cat that walks around Yowling at about 11:00 AM while I’m working. And I have to like, put ’em down for a nap. It’s pathetic. It’s pathetic that I do that. Let’s just be clear. Brett: Yeah. Jeff: soulmate though. 
Jeff’s Mental Health Update Jeff: Um, I’m doing good. I’m, I’m, I’ve been feeling kind of light lately in a nice way. I’ve had ups and downs, but even with the ups and downs, there’s like a, except for one day last week was, there’s just been feeling kind of good in general, which is remarkable in a way. ’cause it’s just like stressful time. There’s some stressful business stuff, like, [00:16:00] a lot of stuff like that. But I’m feeling good and, and just like, uh, yeah, just light. I don’t know, it’s weird. Like, I’ve just been noticing that I feel kind of light and, uh. And not, not manic, not high light. Brett: Yeah. No, that’s Jeff: uh, and that’s, that’s lovely. So yeah. And so I’m doing good. I’m doing good. I fucking, it’s cold. Which sucks ’cause it just means for everybody that’s heard about my workshop over the years, that I can’t really go out there and have it be pleasant Brett: It’s, it’s been Minnesota thus far. Has had, we’ve had like one, one Sub-Zero day. Jeff: whatever. It’s fucking cold. Christina: Yeah. What one? Brett? Brett. It’s December 6th as we’re recording this one Sub-Zero day. That’s insane. Brett: Is it Jeff: Granted, granted I’ve been dressing warm, so I’m ready to go out the door for ice related things. Meaning, meaning government, ice, Brett: Uh, yeah. Yeah. Jeff: So I like wear my long underwear during [00:17:00] the day. ’cause actually like recently. So at my son’s school, which is like six blocks from here, um, has a lot of Somali immigrants in it. And, and uh, and there was a, at one point there was ice activity in the other direction, um, uh, uh, near me. And so neighbors put out a call here around so that at dismissal time people would pair up at all the intersections surrounding the school. And, um, and like a quick signal group popped up, whatever. It was so amazing because like we all just popped out there. And by the time I got out, uh, everyone was already like, posted up and I was like, I’m a, in these situations, I am a wanderer. You want me roaming? I don’t want to pair up with somebody I don’t like, I just, I grabbed a camera with a Zoom on it and like, I was like, I’m in roam. Um, it’s what I was as an activist, what I was as a reporter, like it’s just my nature. Um, but like. Everybody was out and like, and they were just like, they were ready man. And then we got like the all clear and you could just see people in the [00:18:00] neighborhood just like standing down and going home. But because of the true threat and the ongoing arrests here, now that the Minneapolis stuff has started, like I do, I was like wearing long underwear just, and I have a little bag by the door ready to like pop out if something comes up and I can be helpful. Um, and uh, and I guess what I’m saying is I should use that to go into the garage as well if I’m already prepared. Brett: Right. Jeff: But here’s, okay, so here’s a mental health thing actually. So I, one of the, I’ve gone through a few years of just sort of a little bit of paralysis around being able to just, I don’t know what, like do anything that is kind of project related that takes some thinking, whatever it is, like I’m talking about around the house or things that have kind of broken over the years, whatever. So I’ve had this snowblower and it’s a really good snowblower. It’s got headlights. And, uh, and I used to love snow blowing the entire block. Like it just made me feel good, made me feel useful. Um, and sorry I cough. 
I left it outside for a [00:19:00] year for a, like a winter and a spring and water got into the gas tank. It rusted out in there. I knew I couldn’t start it or I’d ruin the whole damn engine. So I left it for two years and I felt bad about myself. But this year, just like probably a month before the first big snowfall, I fucking replaced a gas tank and a carburetor on a machine. And I have never done anything like that in my life. And so then we got the snowfall and I, and I snow blowed this whole block Brett: Nice. Jeff: great. ’cause now they all owe me. Brett: I, uh, I have a, uh, so I have a little electric powered, uh, snowblower that can handle like two inches of snow. Um, and, and on big snowfalls, if you get out there every hour and keep up with it, it, it works. But, but I, my back right now, I can’t stand for, I can’t stand still for 10 minutes and I can’t move for more than like five minutes. And so I’m, I’m very disabled and El has good days and bad days, uh, thus [00:20:00] far. L’s been out there with a shovel, um, really being the hero. But we have a next door neighbor with a big gas powered snowblower. And so we went over, brought them gifts, and, um, asked if they would take care of our driveway on days we couldn’t, uh, for like, you know, we’d pay ’em 25 bucks to do the driveway. And, uh, and they were, he was still reluctant to accept money. Um. But, but we both agreed it was better to like make it a, a transaction. Jeff: Oh my God. You don’t want to get into weird Minnesota neighbor relational. Brett: right. You don’t want the you owe me thing. Um, so, so we have that set up. But in the process we made really good friends with our neighbor. Like we sat down in their living room for I think 45 minutes and just like talked about health and politics and it was, it was really fun. They’re, they’re retired. They’re in their [00:21:00] seventies and like act, he always looks super grumpy. I always thought he was a mean old man. He’s actually, he laughs more easily than most people I’ve ever met. Um, he’s actually, when people say, oh, he is actually a teddy bear, this guy really is, he’s just jovial. Uh, he just has resting angry old man face. Jeff: Or like my, I have public mis throat face, like when I’m out and about, especially when I’m shopping, I know that my face is, I’m gonna fucking kill you if you look me in the eye Brett: I used Jeff: is not my general disposition. Brett: people used to tell me that about myself, but I feel like I, I carry myself differently these days than I did when I was younger. Jeff: You know what I learned? Do you, have you both watched Veep, Christina: Yes, Jeff: you know, Richard sp split, right? Um, and, and he always kind of has this sweet like half smile and he is kind of looking up and I, I figured out at one point I was in an airport, which is where my kill everybody face especially comes up. Just to be clear. TSA, it’s just a feeling inside. I [00:22:00] have no desire to act to this out. I realized that if I make the Richard Plet face, which I can try to make for you now, which is something like if I just make the Richard Plet face, my whole disposition Brett: yeah. Yeah. Jeff: uh, and I even feel a little better. And so I just wanna recommend that to people. Look up Richard Spt, look at his face. Christina: Hey, future President Bridges split. Jeff: future President Richard Splat, also excellent in the Detroiters. Um, that’s all, uh, that’s all I wanted to say about that. 
Brett: I have found that like when I’m texting with someone, if I start to get frustrated, you know, you know that point where you’re still adding smiley emoticons even though you’re actually not, you’re actually getting pissed off, but you don’t wanna sound super bitchy about it, so you’re adding smile. I have found that when I add a smiley emoji in those circumstances, if I actually smile before I send it, it like my [00:23:00] mood will adjust to match, to match the tone I’m trying to convey, and it lessens my frustration with the other person. Jeff: a little joy wrist rocket. Christina: Yeah. Hey, I mean, no, but hey, but, but that, that, that, that, that’s interesting. I mean, they’re, they, they’ve done studies that like show that, right? That like show like, you know, I mean, like, some of this is all like bullshit to a certain extent, but there is something to be said for like, you know, like the power of like positive thinking and like, you know, if you go into things with like, different types of attitudes or even like, even if you like, go into job interviews or other situations, like you act confident or you smile, or you act happy or whatever. Even if you’re not like it, the, the, the, the euphoria, you know, that those sorts of uh, um, endorphin reactions or whatever can be real. So that’s interesting. Brett: Yeah, I found, I found going into job interviews with my usual sarcastic and bitter, um, kind of mindset, Jeff: I already hate this job. Brett: it doesn’t play well. It doesn’t play well. So what are your weaknesses? Fuck off. Um,[00:24:00] Christina: right. Well, well, well, I hate people. Jeff: Yeah. Dealing with motherfuckers like you, that’s one weakness. Sponsor Spot: Shopify Brett: let’s, uh, let’s do a sponsor spot and then I want to hear about Christina winning a contest. Christina: yes. Jeff: very Brett: wanna, you wanna take it away? Sponsor: Shopify Jeff: I will, um, our sponsor this week is Shopify. Um, have you ever, have you just been dreaming of owning your own business? Is that why you can’t sleep? In addition to having something to sell, you need a website. And I’ll tell you what, that’s been true for a long time. You need a payment system, you need a logo, you need a way to advertise new customers. It can all be overwhelming and confusing, but that is where today’s sponsor, Shopify comes in. shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US from household names like Mattel and Gym Shark to brands just getting started. Get started with your own design studio with hundreds of ready to use [00:25:00] templates. Shopify helps you build a beautiful online store to match your brand’s style, accelerate your content creation. Shopify is packed with helpful AI tools that write product descriptions, page headlines, and even enhance your product photography. Get the word out like you have a marketing team behind you. Easily create email and social media campaigns wherever your customers are scrolling or strolling. And best yet, Shopify is your commerce expert with world class expertise in everything from managing inventory to international shipping, to processing returns and beyond. If you’re ready to sell, you are ready to Shopify. Turn your Big Business Idea into with Shopify on your side. Sign up for your $1 per month trial and start selling today@shopify.com slash Overtired. Go to shopify.com/ Overtired. What was that? Say it with me. shopify.com/ Overtired [00:26:00] cha. 
Uh, Brett: the, uh, the group, the group input on the last URL, I feel like we can charge extra for that. That was Jeff: Yeah. Cha-ching Brett: they got the chorus, they got the Overtired Christina: You did. You got the Overtired Jeff: They didn’t think to ask for it, but that’s our brand. Christina: shopify.com/ Overtired. Jeff Tweedy Jeff: What was, uh, I was watching a Stephen Colbert interview with Jeff Tweedy, who just put out a triple album and, uh, it was a very thoughtful, sweet interview. And then Stephen Colbert said, you know, you’re not supposed to do this. And Jeff Tweety said, it’s all part of my career long effort to leave the public wanting less. Christina: Ha, Jeff: That was a great bit. Christina: that’s a fantastic bit. A side note, there are a couple of really good NPR, um, uh, tiny desks that have come out in the last couple of month, uh, couple of weeks. Um, uh, one is shockingly, I, I’ll, I’ll just be a a, a fucking boomer about it. The Googo dolls. Theirs was [00:27:00] great. It’s fantastic. They did a great job. It already has like millions of views, like it wrecked up like over a million views, I think like in like, like less than 24 hours. They did a great job, but, uh, but Brandy Carlisle, uh, did one, um, the other day and hers is really, really good too. So, um, so yeah. Yeah, exactly. So yeah. Anyway, you said, you saying Jeff pd maybe, I don’t know how I got from Wilco to like, you know, there, Jeff: Yeah. Well, they’ve done some good, he’s done his own good Christina: he has, he has done his own. Good, good. That’s honestly, that’s probably what I was thinking of, but Jeff: It’s my favorite Jeff besides me because Bezos, he’s not in the, he’s not in the game. Christina: No. No, he’s not. No. Um, he, he’s, he’s not on the Christmas card list at all. Jeff: Oh man. Jeff’s Concert Marathon Jeff: Can I just tell you guys that I did something, um, I did something crazy a couple weeks ago and I went to three shows in one week, like I was 20 fucking two, Brett: Good grief. Jeff: and. It was a blast. So, okay, so the background of this is my oldest son [00:28:00] loves hip hop, and when we drive him to college and back, or when I do, it’s often just me. Um, he, he goes deep and he, it’s a lot of like, kind of indie hip hop and a lot. It’s just an interesting, he listens to interesting shit, but he will go deep and he’ll just like, give me a tour through someone’s discography or through all their features somewhere, whatever it is. And like, it’s the kind of input that I love, which is just like, I don’t, even if it’s not my genre, like if you’re passionate and you can just weave me through the interrelationship and the history and whatever it is I’m in. So as a result of that, made me a huge fan of Danny Brown and made me a huge fan of the sky, Billy Woods. And so what happened was I went to a hip hop show at the seventh Street entry, uh, which is attached to First Avenue. It’s a little club, very small, lovely little place, the only place my band could sell out. Um, and I watched a hip hop show there on a Monday night, Tuesday night. I went to the Uptown Theater, which Brett is now a actually an operating [00:29:00] theater for shows. Uh, and I, and I saw Danny Brown, but I also saw two hyper pop bands, a genre I was not previously aware of, including one, which was amazing, called Fem Tenal. And I was in line to get into that show behind furries, behind trans Kids. Like it was this, I was the weirdest, like I did not belong. 
Underscores played, and, and this will mean something to somebody out there, but not, didn’t mean anything to me until that night. And, uh. I felt like such, there were times, not during Danny Brown, Danny Brown’s my age all good. But like there were times where I was in the crowd ’cause I’m tall. Anybody that doesn’t know I’m very tall and I’m wearing like a not very comfortable or safe guy seeming outfit, a black hoodie, a black stocking cap. Like I basically looked like I’m possibly a shooter and, and I’m like standing among all these young people loving it, but feeling a little like, should I go to the back? Even like I was leaving that show [00:30:00] and the only people my age were people’s parents that were waiting to pick them up on the way out. So anyway, that was night two. Danny Brown was awesome. And then two nights later I went to see, this is way more my speed, a band called the Dazzling Kilman who were a band that. Came out in the nineties, St. Louis and a noisy Matthew Rock. Wikipedia claims they invented math rock. It’s a really stupid claim, uh, but it’s a lovely, interesting band and it’s a friend of mine named Nick Sakes, who’s who fronted that band and was in all these great bands back when I was in bands called Colos Mite and Sick Bay, and all this is great shit. So they played a reunion show. In this tiny punk rock club here called Cloudland, just a lovely little punk rock club. And, um, and, and that was like rounded out my week. So like, I was definitely, uh, a tourist the early part of the week, mostly at the Danny Brown Show. But then I like got to come home to my noisy punk rock [00:31:00] on, uh, on Thursday night. And I, I fucking did three shows and it hurt so bad. Like even by the first of three bands on the second night. I was like, I don’t think I can make it. And I do. I already pregame shows with ibuprofen. Just to be really clear, I microdose glucose tabs at shows like, like I am, I am a full on old man doing these things. But, um, I did get some cred with my kids for being at a hyper pop show all by myself. And, Christina: Hell yeah. A a Jeff: friends seemed impressed. Christina: no, as a as, as as they should be. I’m impressed. And like, and I, I, I typically like, I definitely go to like more of like, I go, I go to shows more frequently and, and I’m, I’m even like, I’m, I’m gonna be real with you. I’m like, yeah, three in one week. Jeff: That’s a lot. Christina: That’s a lot. That’s a lot. Jeff: man. Did I feel good when I walked home from that last show though? I was like, I fucking did it. I did not believe I wasn’t gonna bail on at least two of those shows, if not all three. Anyway, just wanted to say Brett: I [00:32:00] do like one show a year, but Jeff: that’s how I’ve been for years this year. I think I’ve seen eight shows. Brett: damn. Jeff: Yeah, it’s Brett: Alright, so you’ve been teasing us about this, this contest you won. Jeff: Yeah, please, Christina. Sorry to push that off. Christina: No, no, no, no. That’s, that’s completely okay. That, that, that, that’s great. Uh, no. Christina Wins Big Christina: So, um, I won two six K monitors. Brett: Damn. Jeff: is that what those boxes are behind you? Christina: Yeah, yeah. This is what the boxes are behind me, so I haven’t been able to get them up because this happened. I got them literally right in the midst of all this stuff with my back. Um, but I do have an Ergotron poll now that is here, and, and Grant has said that he will, will get them up. 
But yeah, so I won 2 32 inch six K monitors from a Reddit contest. Brett: How, how, how, Jeff: How does this happen? How do I find a Reddit contest? Christina: Yeah. So I got lucky. So I have, I, I have a clearly, well, well, um, there was a little, there was a little bit of like, other step to it than that, but like, uh, so how it worked was basically, um, LG is basically just put out [00:33:00] two, they put out a new 32 inch six K monitor. I’ll have it linked in, in, in the show notes. Um, so we’ve talked about this on this podcast before, but like one of my big, like. Pet peeve, like things that I can’t get past. It’s like I need like a retina screen. Like I need like the, the perfect pixel doubling thing for that the Mac Os deals with, because I’ve used a 5K screen, either through an iMac or um, an lg, um, ultra fine or, um, a, uh, studio display. For like 11 years. And, and I, and I’ve been using retina displays on laptops even longer than that. And so if I use like a regular 4K display, like it just, it, it doesn’t work for me. Um, you can use apps like, um, like better control and other things to kind of emulate, like what would be like if you doubled the resolution, then it, it down, you know, um, of samples that, so that. It looks better than, than if it’s just like the, the, the 4K stuff where in the, the user interface things are too big and whatnot. And to be clear, this is a Macco West problem. If [00:34:00] you are using Windows or Linux or any other operating system that does fractional scaling, um, correctly, then this is not a problem. But Macco West does not do fractional scaling direct, uh, correctly. Um, weirdly iOS can, like, they can do three X resolution and other things. Um, but, but, but Macs does not. And that’s weird because some of the native resolutions on some of the MacBook errors are not even perfectly pixeled doubled, meaning Apple is already having to do a certain amount of like resolution changes to, to fit into their own, created by their, their own hubris, like way of insisting on, on only having like, like two x pixel doubling 18 years ago, we could have had independent, uh, resolutions, uh, um, for, for UI elements and, and, and window bars. But anyway, I, I’m, I’m digressing anyway. I was looking at trying to get either a second, uh, studio display, which I don’t wanna do because Apple’s reportedly going to be putting out a new one. Um, and they’re expensive or getting, um, there are now a number of different six K [00:35:00] displays that are not $6,000 that are on the market. So, um, uh, uh, Asus has one, um, there is one from like a, a Chinese company called like, or Q Con that, um, looks like a, a complete copy of this, of the pro display XDR. It has a different panel, but it’s, it’s six K and they, they’ve copied the whole design and it’s aluminum and it’s glossy and it looks great, but I’d have to like get it from like. A weird distributor, and if I have any issues with it, I don’t really wanna have to send it back to China and whatnot. And then LG has one that they just put out. And so I’ve been researching these on, on Mac rumors and on some other forums. And, um, I, uh, I, somebody in one of the Mac Roomers forums like posted that there was like a contest that LG was running in a few different subreddits where they were like, tell us why you should get one of, like, we’re gonna be giving away like either one or two monitors, and I guess they did this in a few subreddits. Tell us why this would be good for your workflow. 
And, um, I guess I, I guess I’m one of the people who kind of read the [00:36:00] assignment because it, okay, I’ll just be honest with this, with, with you guys on this podcast, uh, because I, I don’t think anyone from LG will hear this and my answers were accurate anyway. But anyway, this was not the sort of contest where it was like we will randomly select a winner. This was the moderators and lg, were going to read the responses and choose the winner. Jeff: Got it. Christina: So if you spend a little bit of time and thoughtfully write out a response, maybe you stand a better chance of winning the contest. Jeff: yeah, yeah. Put the work in like it was 2002. Christina: Right. Anyway, I still was shocked when I like woke up like on like Halloween and they were like, congratulations, you’ve won two monitors. I’m like, I’m sorry. What? Jeff: That’s amazing. Christina: Yeah, yeah, yeah, Jeff: Nice work. I know I’ve, you know, I’ve been staring at those boxes behind you this whole time, just being like, those look like some sweet monitors. Christina: yeah, yeah. Monitor Setup Challenges Christina: I mean, and, uh, [00:37:00] uh, it’s, it’s, it’s, it’s, it’s, and I, I’m very much, so my, my, my only issue is, okay, how am I gonna get these on my desk? So I’m gonna have to do something with my iMac and I’m probably gonna have to get rid of my, my my, my 5K, um, uh, uh, studio display, at least in the short term. Ergotron Mounts and Tall Poles Christina: Um, but what I did do is I, um, I ordered from, um, Ergotron, ’cause I already have. Um, two of their, um, LX mounts, um, or, or, or, or arms. Um, and only one of them is being used right now. And then I have a different arm that I use for the, um, um, iMac. Um, they sell like a, if you call ’em directly, you can get them to send you a tall pole so that you can put the two arms on top of them. And that way I think I can like, have them so that I can have like one pole and then like have one on one side, one Jeff: I have a tall pole. Christina: and, and yeah, that’s what she said. Um, Jeff: as soon as I said it, I was like, for fuck’s sake. But Christina: um, but, uh, but, but yeah, but so that way I think I, I can, I, in theory, I can stack the market and have ’em side by side. I don’t know. Um, I got that. I, I had to call Tron and, and order that from them. [00:38:00] Um, it was only a hundred dollars for, for the poll and then $50 for a handling fee. Jeff: It’s not easy to ship a tall pole. Brett: That’s what she said. Christina: that is what she said. Uh, that is exactly what she said. But yeah, so I, I, the, the, the unfortunate thing is that, um, I, um, I, I had to, uh, get a, like all these, they, they came in literally right before Thanksgiving, and then I’ve had, like, all my back stuff has Jeff: Yeah, no Christina: debilitating, but I’m looking forward to, um, getting them set up and used. And, uh, yeah. Review Plans and Honest Assessments Christina: And then full review will be coming to, uh, to, I have to post a review on Reddit, but then I will also be doing a more in depth review, uh, on this podcast if anybody’s interested in, in other places too, to like, let let you know, like if it’s worth your money or not. Um, ’cause there, like I said, there are, there are a few other options out there. So it’s not one of those things where like, you know, um, like, thank you very much for the free monitor, um, monitors. 
But, but I, I will, I will give like the, the, you know, an honest assessment or Current Display Setup Brett: So [00:39:00] do you currently have a two display setup? Christina: No. Um, well, yes, and kind of, so I have my, my, I have my 5K studio display, and then I have like my iMac that I use as a two to display setup. But then otherwise, what I’ve had to do, and this is actually part of why I’m looking forward to this, is I have a 4K 27 inch monitor, but it’s garbage. And it, it’s one of those things where I don’t wanna use it with my Mac. And so I wind up only using it with my, with my Windows machine, with my framework desktop, um, with my Windows or Linux machine. And, and because that, even though I, it supports Thunderbolt, the Apple display is pain in the ass to use with those things. It doesn’t have the KVM built in. Like, it doesn’t like it, it just, it’s not good for that situation. So yeah, this will be of this size. I mean, again, like I, I, I’m 2 32 inch monitors. I don’t know how I’m gonna deal with that on my Jeff: I Brett: yeah. So right now I’m looking at 2 32 inch like UHD monitors, Christina: Yeah,[00:40:00] Brett: I will say that on days when my neck hurts, it sucks. It’s a, it’s too wide a range to, to like pan back and forth quickly. Like I’ll throw my back out, like trying to keep track of stuff. Um, but I have found that like if I keep the second display, just like maybe social media apps is the way I usually set it up. And then I only work on one. I tried buying an extra wide curve display, hated it. Jeff: Uh, I’ve always wanted to try one, but Christina: I don’t like them. Jeff: Yeah. Christina: Well, for me, well for me it’s two things. One, it’s the, I don’t love the whole like, you know, thing or whatever, but the big thing honestly there, if you could give me, ’cause people are like, oh, you can get a really big 5K, 2K display. I’m like, that’s not a 5K display. That is 2 27 inch, 1440 P displays. One, you know, ultra wide, which is great. Good for you. That’s not retina. And I’m a sicko Who [00:41:00] needs the, the pixel doubling? Like I wish that my eyes could not use that, but, but, but, Jeff: that needs the pixel. Like was that the headline of your Reddit, uh, Christina: no, no. It wasn’t, it wasn’t. But, but maybe it should be. Hi, I’m a sicko who only, um, fucks with, with, with, with, with, with, with retina displays. Ask me anything. Um, but no, but that’s a good point. Brett: I think 5K Psycho is the Christina: 5K Sicko is the po is the po title. I like that. I like that. No, what I’m thinking about doing and that’s great to know, Brett. Um, this kind of reaffirms my thing. Thunderbolt KVM and Display Preferences Christina: So what’s nice about these monitors is that they come with like, built in like, um, Thunderbolt 5K VM. So, which is nice. So you could conceivably have multiple, you know, computers, uh, connected, you know, to to, to one monitor, which I really like. Um, I mean like, ’cause like look, I, I’ve bitched and moaned about the studio display, um, primarily for the price, but at the same time, if mine broke tomorrow and if I didn’t have any way to replace it, I’ve, I’ve also gone on record saying I would buy a new one immediately. As mad as I am about a [00:42:00] lot of different things with that, that the built-in webcam is garbage. The, you know, the, the fact that there’s not a power button is garbage. The fact that you can’t use it with multiple inputs, it’s garbage. But it’s a really good display and it’s what I’m used to. 
Um, it’s really not any better than my LG Ultra fine from 2016. But you know what? Whatever it is, what it is. Um. I, I am a 5K sicko, but being able to, um, connect my, my personal machine and my work machine at the same time to one, and then have my Windows slash Linux computer connected to another, I think that’s gonna be the scenario where I’m in. So I’m not gonna necessarily be in a place where I’m like, okay, I need to try to look at both of them across 2 32 inch displays. ’cause I think that that, like, that would be awesome. But I feel like that’s too much. Brett: I would love a decent like Thunderbolt KVM setup that could actually swap like my hubs back and Christina: Yes. MacBook Pro and Studio Comparisons Brett: Um, so, ’cause I, I have a studio and I have my, uh, Infor MacBook Pro [00:43:00] and I actually work mostly on the MacBook Pro. Um, but if I could easily dock it and switch everything on my desk over to it, I would, I would work in my office more often. ’cause honestly, the M four MacBook Pro is, it’s a better machine than the original studio was. Um, and I haven’t upgraded my studio to the latest, but, um, I imagine the new one is top notch. Christina: Oh yeah. Yeah. Brett: my, my other one, a couple years old now is already long in the tooth. Christina: No, I mean, they’re still good. I mean, it’s funny, I saw that some YouTube video the other day where they were like, the best value MacBook you can get is basically a 4-year-old M1 max. And I was like, I don’t know about that guys. Like, I, I kind of disagree a little bit. Um, but the M1 max, which is I think is what is in the studio, is still a really, really good ship. But to your point, like they’ve made those, um. You know, the, the, the new ones are still so good. Like, I have an M three max as my personal laptop, and [00:44:00] that’s kind of like the dog chip in the, in the m um, series lineup. So I kind of am regretful for spending six grand on that one, but it is what it is, and I’m like, I’m not, I’m not upgrading. Um, I mean, maybe, maybe in, in next year if, if the M five Pro, uh, or M five max or whatever is, is really exceptional, maybe I’ll look at, okay, how much will you give me to, to trade it in? But even then, I, I, but I feel like I’m at that point where I’m like, it gets to a point where like it’s diminishing returns. Um, but, uh, just in terms of my own budget. But, um, yeah, the, the new just info like pro or or max, whatever, Brett: I have, I have an M four MacBook Pro sitting around that I keep forgetting to sell. Uh, it’s the one that I, it only had a 256 gigabyte hard drive, Jeff: what happened to me when I bought my M1, Brett: and I, and I regretted that enough that I just ordered another one. But, uh, for various reasons, I couldn’t just return the one I didn’t Jeff: ’cause it was.[00:45:00] Brett: so now I, now I have to sell it and I should sell it while it’s still a top of the line machine Christina: Sell it before, sell, sell, sell, sell it before next month, um, or, or February or whenever they sell it before then the, the pros come out. ’cause right now the M five base is out, but the pros are not. So I think feel like you could still get most of your value for it, especially since it has very few battery cycles. Be sure to put the battery cycles on your Facebook marketplace or eBay thing or whatever. 
Um, I bought my, uh, she won’t listen to this so she won’t know, but, um, they, there was a, a killer Cyber Monday deal, uh, for Best Buy where they had like a, the, the, the, so it’s several years old, but it was the, the M two MacBook Air, but the one that they upgraded to 16 gigs of Ram when Apple was like, oh, we have to have Apple Intelligence and everything, because they actually thought that they were actually gonna ship Apple Intelligence. So they like went back and they, like, they, they, you know, retconned like made the base model MacBook Air, like 16 [00:46:00] gigs. Um, and, uh, anyway, it was, it was $600, um, Jeff: still crazy. Christina: which, which like even for like a, a, a 2-year-old machine or whatever, I was like, yeah, she, my sister, I think she’s on like, like a 2014 or older than that. Like, like MacBook Air. She doesn’t even know where the MagSafe is. I don’t think she even knows where the laptop is. So she’s basically doing everything like on her phone and I’m like, okay, you need a laptop of some type, but at this point. I do feel strongly that like the, the, the $600 or, or, or actually I think it was $650, it was actually less, it is actually more expensive than what the, the, the Cyber Monday sale was, um, the M1, Walmart, MacBook Air. I’m like, absolutely not like that is at this point, do not buy that. Right? Like, I, especially with eight gigs of ram, I’m, I’m like, it’s been, it’s five years old. It’s a, it was a great machine and it was great value for a long time. $200. Cool, right? Like, if you could get something like use and, and, and, and if you could replace the battery or, you know, [00:47:00] for, for, you know, not, not too much money or whatever. Like, I, I, I could see like an argument to be made like value, right? But there’d be no way in hell that I would ever spend or tell anybody else to spend $650 on that new, but $600 for an M two with Jeff: Now we’re talking. Christina: which has the redesign brand new. I’m like, okay. Spend $150 more and you could have got the M four, um, uh, MacBook Air, obviously all around Better Machine. But for my sister, she doesn’t need that, Jeff: What do we have to do to put your sister in this M two MacBook Christina: that, that, that, that, that, that’s exactly it. So I, I, I was, well, also, it was one of those things I was like, I think that she would rather me spend the money on toys for my nephew for Santa Claus than, than, uh, giving her like a, a processor upgrade. Um, Jeff: Claus isn’t real. Brett: Oh shit. Jeff: Gotcha. Every year I spoil it for somebody. This year it was Christina and Brett. Sorry guys. Brett: right. Well, can I tell you guys Jeff: Yeah. [00:48:00] Brett Software. Brett: two quick projects before we do Jeff: Hold on. You don’t have to be quick ’cause you could call it Brett: We’re already at 45 minutes and I want Jeff: What I’m saying, skip GrAPPtitude. This is it? Brett: okay. Christina: us about Mark. Tell us about your projects. Brett: So, so Mark three is, there’s a public, um, test flight beta link. Uh, if you go to marked app.com, not marked two app.com, uh, marked app.com. Uh, you, there’s a link in the, in the, at the top for Christina: Join beta. Mm-hmm. Brett: Um, and that is public and you can join it and you can send me feedback directly through email because, um, uh, uh, the feedback reporter sucks for test flight and you can’t attach files. And half the time they come through as anonymous feedback and I can’t even follow up on ’em. So email me. 
But, um, I'll be announcing that on my blog soon-ish. Um, right now there's like [00:49:00] maybe a couple dozen, um, testers and I, it's nice and small and I'm solving the biggest bugs right away. Um, so that's been, that's been big. Like Marked, even since we last talked, has added. Do you remember Jeff when Merlin was on and he wanted to. He wanted to be able to manage his styles, um, and disable built-in styles. There's now a whole table-based style manager where you Jeff: saw that. Brett: you can, you can reorder, including built-in styles. You can reorder, enable, disable, edit, duplicate. Um, it's like a full, full-fledged, um, style manager. And I just built a whole web app that is a style generator that gives you, um, automatic like rhythm calculations for your CSS and you can, you can control everything through like, uh, like UI fields instead of having to [00:50:00] write CSS. Uh, but you can also open up a very, I've spent a lot of time on the CodeMirror CSS editor in the web app. Uh, so, and it's got live preview as you edit in the CodeMirror field. Um, so that's pretty cool. And that's built into Marked. So if you go to Style, um, Generate Style, it'll load up a, a style generator for you. Anyway, there's, there's a ton. I'm not gonna go into all the details, but, uh, anyone listening who uses Markdown for anything, especially if you want the ability to export to like Word and EPUB and advanced PDF export, um, join the beta. Let me know what you think. Uh, help me squash bugs. But the other thing, every time I push a beta for review, before the new bug reports come in, I've been putting time into a tool. Markdown Processor: Apex Brett: I'm calling [00:51:00] Apex and um, I haven't publicly announced this one yet, but I probably will by the time this podcast comes out. Jeff: I mean, doesn't this count? Brett: It, it does. I'm saying like this, this might be a, you hear, you heard it here first kind of thing, um, but if you go to github.com/ttscoff/apex, um, I built a, uh, pure C Markdown processor that combines syntax from kramdown, GitHub Flavored Markdown, MultiMarkdown, maku, um, CommonMark. And basically you can write syntax from any of those processors, including all of their special features, um, in one document, and then use Apex in its unified mode, and it'll just figure out what all of your syntax is supposed to do. Um, so you can take, you can port documents from one platform to another [00:52:00] without worrying about how they're gonna render. Um, if I can get any kind of adoption with Apex, it could solve a lot of problems. Um, I built it because I want to make it the default processor in Marked, 'cause right now, you, you have to choose, you know, kram... Christina: Which one? Brett: ...mark, and, and choosing one means you lose something in order to gain something. Um, so I wanted to build a universal one that brought together everything. And I added cool features from some extensions of other languages, such as: if you have two lists in a row, normally in Markdown, it's gonna concatenate those into one list. Now you can put a caret on a line between the two lists and it'll break it into two lists. I also added support for an extension to kramdown that lets you put double, uh, carets inside a table cell and [00:53:00] create a row span. So like a cell that, that expands into two rows but doesn't expand the rest of the row.
Um, so you can do cell spans and row spans, and it has a relaxed table version where you don't have to have an alignment row, which is, uh, sometimes we just wanna make a table quickly. You make two lines. You put some pipes in. This will, if there's no alignment row, it will generate a table with just a table body and table data cells and no header. It also allows footers, you can add a footer to a table by using equals signs in the separator line. Um, it, it's, Jeff: This is very civilized, Brett: it is. Christina: is amazing, Brett: So where CommonMark is extremely strict about things, um, Apex is extremely permissive. Jeff: also itty bitty things like talk about the callout boxes from like Brett: oh yeah, it, it can handle callout syntax from Obsidian and Bear and Xcode Playgrounds. [00:54:00] Um, and it incorporates all of Marked's syntax for like file includes and even renders like auto-scroll pauses that work in Marked and some other teleprompter situations. Um, it uses file include syntax from MultiMarkdown, like, which is just like a curly brace, and, uh, Marked, which is, uh, left like a double left, uh, angle bracket and then different brackets to surround a file name, and it handles iA Writer file inclusion where you just type a forward slash and then the name of a file and it automatically detects if that file is an image or source code or Markdown text, and it will import it accordingly. And if it's a CSV file, it'll generate a table from it automatically. It's, it's kind of nuts. I, it's kind of nuts. I could not have done this [00:55:00] without Copilot. I, I am very thankful for Copilot because my C skills are not, would not on their own, have been up to this task. I know enough to debug, but yeah, a lot of these features I got a big hand from Copilot on. Jeff: This is also Brett. This is some serious Brett Terpstra. Terps Hard Christina: Yeah, it is. I was gonna say, this is like Jeff: and also that's right. Also, if your grandma ever wrote you a note and it, and though you couldn't really read it, it really well, that renders perfectly Christina: Amazing. No, I was gonna say this is like, okay, so Apex is like the perfect name 'cause this is the apex of Brett. Jeff: Yes. Apex of Brett. Christina: That's also that, that's, that's not an alternate episode title, Apex of Brett. Because genuinely, no, Brett, like I am, I am so stunned and impressed. I mean, you all, you always impress me, like you are the most impressive like developer that I, that I've ever known. But you, this is incredible. And, and this, I, I love this [00:56:00] because as you said, like CommonMark is incredibly strict. This is incredibly permissive. But this is great. 'cause there are those scenarios where you might have like, I wanna use one feature from one thing or one from another, or I wanna combine things in various ways, or I don't wanna have to think about it, you know? Brett: IALs, I forgot to mention IALs, inline attribute lists, which is a kramdown feature that lets you put curly brackets after like a paragraph and then a colon and then say, dot callout inside the curly brackets. And then when it renders the Markdown, it creates that paragraph and adds class equals callout to the paragraph. Um, and in, in kramdown you can apply these to everything from list items to lists to block quotes. Like you can do 'em for spans. You could like have one after, uh, link syntax and just apply, say, dot external to a link.
So the IAL syntax can add IDs, classes, and, uh, arbitrary [00:57:00] attributes to any element in your Markdown when it renders to HTML. And, uh, and Apex has first-class support for IALs. That was really, that was, that Christina: that was really hard, Brett: I wrote it because I wanted, I wanted MultiMarkdown, uh, for my prose writing, but I really missed the IALs. Christina: Yes. Okay. Because, see, I run into this sort of thing too, right? Because, like, this is a problem, like that. I mean, it’s a very niche problem, um, that, that, you know, people who listen to this podcast probably are more familiar with than other types of people. But, like, when you have to choose your markdown processor, which, as you said, like, Brett, like, that can be a problem. Like, like, with, with using Marked or anything else, you’re like, what am I giving up? What do I have? And, and, like, for me, because I started using MultiMarkdown, um, uh, largely because of you, um, I think I was using it, I knew about it before you, but largely because of, of, of you, like, MultiMarkdown has always been, like, kind of my, or was historically my flavor of choice. It has since shifted to being [00:58:00] GitHub Flavored Markdown, but that’s just because the industry has taken that on, right? But there were, you know, certain things, like, in, like, you know, MultiMarkdown that work a certain way. And then, yeah, there are things in Kramdown, there are things in these other things, and, like, this is just, this is awesome. This Brett: It is. The whole thing is built on top of cmark-gfm, which is GitHub’s port of CommonMark with the GitHub Flavored Markdown Christina: Right. Brett: Um, and I built, like, I kept that as a submodule, totally clean, and built all of this as extensions on top of cmark-gfm, which, you know, so it has full compatibility with GitHub and with CommonMark, like, outta the box. And then everything else is built on top of that. So it, uh, it covers, it covers all the bases. You’ll love it Christina: I’m so excited. No, this is awesome. And I Brett: blazing fast. It can render, I have a complex document that, that uses all of its features, and it can render it in [00:59:00] 0.006 seconds. Christina: that’s awesome. Jeff: Awesome. Christina: That’s so cool. No, this is great. And yeah, I, and I think that, honestly, like, this is the sort of thing, like, if, yeah, if you can eventually get this to, like, be, like, the engine that powers, like, Marked 3, like, that’ll be really slick, right? Because then, like, yeah, okay, I can take one document and then just, you know, kind of, you know, wi with, with the, you know, ha have, have the compatibility mode where you’re like, okay, the unified mode or whatever you
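For listeners who have not seen these Kramdown-style extensions before, here is a small, hypothetical sample document exercising the features Brett describes: a block IAL that adds a class to a paragraph, a span IAL on a link, a caret used as an end-of-block marker between two lists, and a header-less "relaxed" table (the relaxed table is an Apex behavior as described in the episode, not standard Kramdown). The snippet is plain Python only so it stays runnable on its own; it just holds and prints the sample text.

```python
# Hypothetical sample input for a Kramdown/Apex-style processor, based on the
# syntax described in the episode. Running this only prints the document;
# feeding it to Apex itself is left to the reader.
SAMPLE = """\
Watch your step on the wet floor.
{: .callout}

- groceries: eggs
- groceries: coffee

^

- hardware: screws (the caret above keeps this as a second, separate list)

| North | 42 |
| South | 17 |

See the [full report](https://example.com/report){: .external} for details.
"""

if __name__ == "__main__":
    print(SAMPLE)
```

With IAL support, a processor would render the first paragraph as `<p class="callout">...</p>` and give the link `class="external"`; without it, the curly-brace annotations would just be literal text.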
Today we're looking at one of those inconspicuous technologies you probably use every day: UUIDs! Whether in your database, in your operating system, or in distributed systems. How and why do UUIDs work, which versions exist, and why isn't every UUID equally good for your database? Also: alternatives such as Snowflake, ULID, or NanoID. In the Engineering Kiosk Advent Calendar 2025, fellow podcasters and we ourselves, Andy and Wolfi, talk every day, short and snappy, about an interesting tech topic in just a few minutes. You can find our current advertising partners at https://engineeringkiosk.dev/partners Quick feedback on this episode:
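To make the "not every UUID is equally good for your database" point concrete, here is a minimal Python sketch of the UUIDv7 layout from RFC 9562 (48-bit Unix-millisecond timestamp up front, then random bits) next to a fully random UUIDv4. Recent Python releases and third-party packages ship ready-made uuid7() implementations; this hand-rolled version is only meant to show why v7 keys arrive in roughly ascending order and cluster nicely in a B-tree index, while v4 keys scatter inserts across it.

```python
import os
import time
import uuid


def uuid7() -> uuid.UUID:
    """Hand-rolled UUIDv7 sketch: timestamp-prefixed, so values are k-sortable."""
    ts_ms = time.time_ns() // 1_000_000                    # Unix time in ms
    value = (ts_ms & ((1 << 48) - 1)) << 80                 # 48-bit timestamp in the high bits
    value |= int.from_bytes(os.urandom(10), "big")          # 80 random low bits
    value = (value & ~(0xF << 76)) | (0x7 << 76)            # version field = 7
    value = (value & ~(0x3 << 62)) | (0x2 << 62)            # RFC 4122/9562 variant
    return uuid.UUID(int=value)


if __name__ == "__main__":
    a, b = uuid7(), uuid7()
    print(a)             # shares its leading timestamp bits with b
    print(b)             # created later, so it sorts after a (at ms granularity)
    print(uuid.uuid4())  # fully random: no ordering, inserts land anywhere in the index
```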
Join hosts Lois Houston and Nikita Abraham as they explore the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it's so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
The Silicon Valley growth formula | Mickos | #neuvottelija 362. Mårten Mickos on the key lessons of his book Kasvun kaava (The Growth Formula) for exponentially growing companies. Why Silicon Valley succeeds time after time in creating explosive growth by focusing on immediate delivery. How a CEO can create industry winners. What Finns should do to get the economy growing. 00:00 Mårten Mickos and the book Kasvun kaava 00:33 Kasvun kaava vs. the Wolt story and the Startup Handbook 01:18 Focusing on opportunities and clock speed 02:15 The platform economy, Wolt, and the future of business | Marianne Vikkula & Paavo Ritala https://youtu.be/8ZfSWgTbvpo 02:42 Silicon Valley's and Europe's completely different clock speeds 03:06 Linear growth versus digital exponential growth 03:36 Pandemics and markets as examples of exponential behavior 04:00 A one-day delay can be fatal in digital business; time as a critical resource 05:38 A linear series versus binary, explosive doubling 07:04 In the digital world it pays to ship imperfect things fast 08:21 Exponentiality has to be remembered in every decision of the working day 08:59 Cold-starting yourself: take the first and the second step 09:34 Happiness and brisk work are not opposites 09:57 After work done fast you have to live slowly 10:30 Long-form conversations and letting thinking mature more slowly 12:14 Silicon Valley's shared sense of time synchronizes people's clock speeds 12:45 Filtering and prioritizing twenty thousand contacts 13:15 Americans accept short meetings for the sake of efficiency 13:39 Silicon Valley's physical density and the importance of collisions 14:34 Debating without doing, and the Finnish original sin of settling for a supporting role 16:19 The MySQL story: Mårten Mickos's arrival as CEO 17:23 MySQL's design principle: save the user's time 17:54 Open source made social media's scalability possible 18:55 Becoming CEO and getting the basics in order 19:37 Contracts, copyrights, financing, and a growth strategy sorted out 21:14 Mårten, proud to be a mercenary leader and a scout 21:57 Elon Musk's ego as a volatile combination of power and risk 22:49 The leader's role in balancing power and the owners' interests 23:12 Mårten Mickos seeks out circles of smart and kind people 24:22 Scout boats and student years as a school of leadership 25:14 Making the leadership team better than the sum of its parts 26:21 Build the best possible team with the resources you have 27:11 A humble CEO versus one who is active in public 28:16 The "Working 9 to 3" insight and shrinking the database market 29:25 Kleiner Perkins's Ray Lane gets excited about MySQL as a threat to Oracle 30:15 A disruptive company needs a CEO who throws themselves into speaking 32:00 Finland's quiet leaders and excessive humility in public 34:34 Courage, collaboration, and responsibility as Aalto University's values 37:53 A vibe-coding society and building company analyzers 38:50 AI brings Otaniemi-like opportunities to the whole country 41:41 In remote work the results show better than at the office 45:05 In remote work the human side of a person has to be made visible 46:20 Slack, IRC, and shared channels increase productivity 50:20 The Kasvun kaava writing process, with the publisher in Google Docs 51:17 A startup founder should not carry the responsibility of saving Finland 52:39 Startups as the most important engine of Finland's economic growth 54:01 Systematically dismantling administrative obstacles and permit thresholds 55:21 Small decisions and a volunteer spirit as enablers of growth 58:48 The proximity of maternity wards as the real measure of a welfare state 1:00:10 Making Finland the world's best small nation and business country 1:04:13 The expat tax break encourages people to return and build Finland 1:04:33 AI gives permission to experiment and break things 1:05:04 Mårten's Crazy Finn identity and schnapps songs around the world. In the #neuvottelija Inner Circle we also discuss, among other things, singing in team meetings and other unusual forms of leadership.
Mårten Mickos has led three global technology companies and taken one of them to an acquisition worth more than a billion dollars. Today he spends much of his time coaching young startup hopefuls and promoting growth thinking. The companies Mårten led, MySQL, Eucalyptus, and HackerOne, were pioneers in distributed organizational structures, remote work, networks, and shared responsibility. He immersed himself in all of this and turned it into success stories before many people had even caught on to remote work or networked organizations. That is why this podcast focuses on what Mårten knows best: leadership. We draw momentum from the themes of his new book, Kasvun kaava. We also talk a lot about the importance of culture and values, and of course about how Finland could be woken from its slumber. Mårten's key word here is agency: a person's sense and understanding that they have a duty to act. Enjoy the episode!
In this Technology Reseller News, Podcast, Doug Green interviews Don Boxley, CEO and Co-Founder of DH2i, for a deep dive into one of the biggest challenges facing IT teams today: the migration gap between legacy Windows-based SQL Server deployments and the containerized, Linux-driven future that modern applications increasingly require. DH2i, a long-time Microsoft and Red Hat partner, delivers high availability, secure communication, and cross-platform mobility for SQL Server, Oracle, and MySQL environments. Their flagship platform, DX Enterprise, enables customers to run the same production-grade SQL instance across Windows, Linux, and Kubernetes — and move between them with seamless failover. As Boxley explains, this eliminates the traditional roadblocks that keep enterprises locked into aging infrastructure. With SQL Server 2025 bringing new support for AI-driven applications, Microsoft tapped DH2i to deliver mission-critical HA capabilities for these next-generation workloads. The company also introduced a hands-on, step-by-step Minikube tutorial, allowing DBAs and MSPs to experiment safely with Kubernetes-based SQL deployments on their own PCs before ever touching production. “Most teams think they're stuck on Windows — they're not. With DX Enterprise, you can move SQL Server to Linux or containers without disruption, and your customers won't even know the difference,” Boxley notes. DH2i's developer edition is available as a free download, complete with 30-day support, giving IT teams a no-risk path to testing, learning, and modernizing their database infrastructure. Learn more at https://dh2i.com/. Software Mind Telco Days 2025: On-demand online conference Engaging Customers, Harnessing Data
For memberships: join this channel as a member here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Exploring Cloud Databases, Scalability, and Simple Engineering with Sam Lambert, CEO of PlanetScale. In this episode of The Geek Narrator podcast, we welcome Sam Lambert, CEO and Co-Founder of PlanetScale, known for creating the world's fastest and most scalable cloud database. Sam shares his insights on databases, operational excellence, and simple engineering. We discuss topics such as scalability, Postgres versus MySQL, and replication. Sam also talks about handling complexity in engineering, the unique features of Vitess, and how PlanetScale achieves high availability. Don't miss this deep dive into the future of cloud databases. Like, share, and subscribe to support the channel! Chapters: 00:00 Introduction and Episode Overview 01:13 Meet Sam Lambert: Background and Career 02:42 Balancing Work and Social Media 05:48 The Philosophy of Simple Engineering 14:21 The Slotted Counter Pattern at GitHub 18:27 Postgres vs MySQL: Design Flaws and Philosophical Differences 28:58 Sharding and Scaling with Vitess 37:01 Database Branching and Schema Changes 38:50 Common Practices in Startups 39:07 Challenges with Data Branching 40:45 Legal and Ethical Considerations 42:31 Staging Environments vs. Dev Branches 45:26 Trade-offs in Cloud Databases 52:41 Replication and Durability 01:00:02 Ensuring High Availability 01:08:04 Backup Strategies and Testing 01:10:41 Conclusion and Final Thoughts Learn about PlanetScale: https://planetscale.com/ For memberships: join this channel as a member here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Don't forget to like, share, and subscribe for more insights! ============================================================================= Like building stuff? Try out CodeCrafters and build amazing real-world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on a paid subscription. https://app.codecrafters.io/join?via=geeknarrator ============================================================================= Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning!
In this episode, Arnaud and Guillaume discuss the latest developments in the programming world, in particular what's new in Java 25, JUnit 6, and Jackson 3. They also cover recent developments in AI, problems encountered in the cloud, and the current state of React and the web. In the conversation the speakers touch on various technology topics, including the WebAssembly (Wasm) specification, the use of UUIDs in databases, the RAG approach in artificial intelligence, MCP tooling, and image creation with Nano Banana. They also discuss the complexities of the YAML format, the recent drama in the Ruby community, the importance of good documentation, return-to-office policies, and progress on Claude Code. Finally, they mention the Café IA initiative to demystify artificial intelligence. Recorded on October 24, 2025. Download the episode LesCastCodeurs-Episode-331.mp3 or watch it as video on YouTube. News Languages GraalVM detaches itself from the Java release train https://blogs.oracle.com/java/post/detaching-graalvm-from-the-java-ecosystem-train An article by Loic Mathieu on Java 25 and its new features https://www.loicmathieu.fr/wordpress/informatique/java-25-whats-new/ Groovy 5.0 is out! https://groovy-lang.org/releasenotes/groovy-5.0.html Groovy 5: an evolution of previous versions, new features, and code simplification. Extended JDK compatibility: full support for JDK 11-25, with JDK 17-25 features available on older JDKs. Major method extensions: more than 350 improved methods, array operations up to 10x faster, lazy iterators. AST transformation improvements: new @OperatorRename, automatic generation of @NamedParam for @MapConstructor and copyWith. Modernized REPL (groovysh): based on JLine 3, cross-platform support, syntax highlighting, history, and completion. Better Java interoperability: pattern matching for instanceof, JEP-512 support (compact source files and instance main methods). Modern web standards: Jakarta EE (default) and Javax EE (legacy) support for producing web content. Improved type checking: format-string checking that is more robust than Java's. Language additions: generation of infinite iterators, index variables in loops, a logical implication operator ==>. Miscellaneous improvements: automatic import of java.time.*, var with multiple assignment, named capture groups for regexes (=~), ASCII bar-chart utility methods. Breaking changes: several modifications may require adapting existing code (visibility, import handling, behavior of some methods). JDK requirements: build with JDK 17+, run on JDK 11+. Libraries LangChain4j integration in ADK for Java, letting developers use any LLM with their ADK agents https://developers.googleblog.com/en/adk-for-java-opening-up-to-third-party-language-models-via-langchain4j-integration/ ADK for Java 0.2.0: a new version of Google's agent development kit. LangChain4j integration: opens ADK up to third-party language models. More LLM choice: in addition to Gemini and Claude, access to models from OpenAI, Anthropic, Mistral, etc. Local models supported: models can be used via Ollama or Docker Model Runner.
Tool improvements: creating tools from object instances, better asynchronous support, and control over execution loops. Advanced logic and memory: chained callbacks and new options for memory management and RAG (Retrieval-Augmented Generation). Simplified build: introduction of a parent POM and the Maven Wrapper for a consistent build process. JUnit 6 is out https://docs.junit.org/6.0.0/release-notes/ : Java 17 and Kotlin 2.2 baseline, JSpecify nullability annotations, integrated JFR support, Kotlin suspend function support, support for cancelling test execution, removal of deprecated APIs. JGraphlet, a dependency-free Java library for building graphs of tasks to execute https://shaaf.dev/post/2025-08-25-think-in-graphs-not-just-chains-jgraphlet-for-taskpipelines/ JGraphlet: a lightweight (zero-dependency) Java library for building task pipelines. Key principles: simplicity, based on a graph execution model. Tasks: each task has an input/output and can be asynchronous (Task) or synchronous (SyncTask). Pipeline: a TaskPipeline builds and runs the graph and manages the I/O. Graph-first model: the workflow is a directed acyclic graph (DAG); tasks are defined as nodes and connections as edges, with natural support for fan-out and fan-in patterns. Simple API: addTask("id", task), connect("fromId", "toId"). Fan-in: a task receiving several inputs gets a Map (keys = the IDs of the parent tasks). Execution: pipeline.run(input) returns a CompletableFuture (can be blocking via .join() or asynchronous). Lifecycle: TaskPipeline is AutoCloseable, guaranteeing resource release (try-with-resources). Context: a PipelineContext for sharing thread-safe data/metadata between tasks within a run. Caching: an optional task cache to avoid recomputation. Microsoft's turn to launch its (Microsoft) Agent Framework, which looks like a merger/rewrite of AutoGen and Semantic Kernel https://x.com/pyautogen/status/1974148055701028930 More details in the blog post: https://devblogs.microsoft.com/foundry/introducing-microsoft-agent-framework-the-open-source-engine-for-agentic-ai-apps/ An open-source SDK and runtime for sophisticated multi-agent systems. Unifies Semantic Kernel and AutoGen. Pillars: open standards (MCP, A2A, OpenAPI) and interoperability. A bridge from research to production (AutoGen patterns for the enterprise). Extensible, modular, open source, with built-in connectors. Production-ready (observability, security, durability, "human in the loop"). Relationship to SK/AutoGen: builds on them, does not replace them, and simplifies migration. Future integrations: alignment with the Microsoft 365 Agents SDK and the Azure AI Foundry Agent Service. Jackson 3.0 released (the Jackson Five soon!!!) https://cowtowncoder.medium.com/jackson-3-0-0-ga-released-1f669cda529a Jackson 3.0.0 was published on October 3, 2025. Goal: a clean base for long-term development, removal of technical debt, a simplified architecture, and improved ergonomics. Main changes: a Java 17 baseline is required (vs Java 8 for 2.x). The Maven group ID and Java package are renamed to tools.jackson so that it can coexist with Jackson 2.x (exception: jackson-annotations does not change).
Removal of all the @Deprecated features from Jackson 2.x and renaming of several key entities/methods. Changes to default configuration settings (e.g., FAIL_ON_UNKNOWN_PROPERTIES disabled). ObjectMapper and TokenStreamFactory are now immutable; configuration is done via builders. A switch to unchecked base exceptions (JacksonException) for convenience. The "Java 8 modules" (for parameter names, Optional, java.time) are integrated directly into the default ObjectMapper. An improved JsonNode tree model (more configurability, better error handling). Testcontainers Java 2.0 is out https://github.com/testcontainers/testcontainers-java/releases/tag/2.0.0 Removed JUnit 4 support -> oops. Grails 7.0 is out, along with its move to the Apache foundation https://grails.apache.org/blog/2025-10-18-introducing-grails-7.html The release of Apache Grails 7.0.0 was announced on October 18, 2025. Grails has become a top-level project (TLP) of the Apache Software Foundation (ASF), graduating from incubation. Dependencies updated to Groovy 4.0.28, Spring Boot 3.5.6, and Jakarta EE. Everything you need to get started developing AI agents with ADK for Java https://glaforge.dev/talks/2025/10/22/building-ai-agents-with-adk-for-java/ Guillaume shared plenty of resources on developing AI agents with ADK for Java: an article with all the pointers, a slide deck and the video recording of the talk given at Devoxx Belgium, a codelab with instructions for getting started and building your first agents, plenty of other samples for inspiration and to see what the framework makes possible, and also a project template on GitHub with a Maven build and a first example agent. Cloud The internet is broken, at least the part hosted by AWS #hugops https://www.theregister.com/2025/10/20/aws_outage_amazon_brain_drain_corey_quinn/ Major AWS outage (US-EAST-1 region): a DNS problem affecting DynamoDB, a foundational service, caused cascading failures across many internet services. Slow response: 75 minutes to identify the root cause; the status page initially said everything was fine. Main underlying cause: "brain drain" (the departure of senior AWS engineers). Loss of institutional knowledge: decades of critical expertise on AWS systems and their historical failure modes left with those departures. Predictions confirmed: a former AWS employee had anticipated an increase in major outages back in 2024. Evidence of the talent loss: more than 27,000 layoffs at Amazon (2022-2025), a high rate of "regretted attrition" (69-81%), and discontent tied to the return-to-office policy and the lack of recognition of expertise. Consequences: the new, smaller teams lack the experience needed to prevent outages or shorten recovery times. Outlook: the market may forgive this one, but the problem will persist, making future incidents more likely. Web React won "by default" https://www.lorenstew.art/blog/react-won-by-default/ React dominates by default rather than on technical merit, stifling front-end innovation. It is a reflex choice ("everyone knows React") that discourages evaluating potentially superior alternatives. React's technical foundations (the virtual DOM, the complexity of Hooks, Server Components) are seen as present-day constraints.
Innovative frameworks (Svelte for compilation, Solid for fine-grained reactivity, Qwik for "resumability") offer better-performing models but are under-adopted. The React monoculture generates technical debt (runtime, reconciliation) and centers skills on the framework rather than on web fundamentals. React's API is complex, increasing cognitive load and the risk of bugs, unlike simpler alternatives. The network effect creates a "prison": React-specific job postings, institutional inertia, leaders picking the "safe" option. Frameworks should be chosen based on project constraints and technical merit, not inertia. The usual arguments (ecosystem maturity, hiring, libraries, stability) are called into question; excessive dependence can become a burden. The monoculture slows the web's evolution and diverts talent, harming the diversity that a healthy, innovative ecosystem needs. Promote framework diversity for a more resilient and innovative ecosystem. WebAssembly 3 is out https://webassembly.org/news/2025-09-17-wasm-3.0/ Data and Artificial Intelligence UUIDv4 or UUIDv7 for your primary keys? It depends... especially for highly distributed databases! https://medium.com/google-cloud/understanding-uuidv7-and-its-impact-on-cloud-spanner-b8d1a776b9f7 UUIDv4: fully random identifiers. They cause performance problems in relational databases (e.g., PostgreSQL, MySQL, SQL Server) that use B-tree indexes: random inserts reduce cache efficiency and lead to page splits and fragmentation. UUIDv7: a new standard designed to fix these problems. It embeds a 48-bit timestamp as the identifier's prefix, making it time-ordered and "k-sortable". It improves performance in B-tree databases by favoring sequential inserts and cache locality and by reducing fragmentation. The problem with UUIDv7 for some horizontally scalable distributed databases such as Spanner: the sequential nature of UUIDv7 (via the timestamp) creates write hotspots in Spanner. Spanner distributes data into "splits" (partitions) based on key ranges; sequential keys concentrate writes on a single split. This prevents Spanner from spreading the load and scaling writes, creating a bottleneck (an anti-pattern). When it is NOT a problem for Spanner: if the total write rate is below roughly 3,500 writes per second for a single split, the hotspot is "benign" at that scale and does not degrade performance. Solutions for Spanner: the key principle is to make sure the first part of the primary key is NOT sequential so that writes are distributed; UUIDv7 can be used, but not as the prefix. Greenfield design: use a non-sequential primary key (e.g., a plain UUIDv4); for time-based queries, create a secondary index on the timestamp column, but SHARD it (e.g., with a shardId) to avoid hotspots on the index itself. Migration (keeping UUIDv7): add a sharding prefix: introduce a computed `shard` column (e.g., `MOD(ABS(FARM_FINGERPRINT(order_id_v7)), N)`) and use it as the FIRST element of a composite primary key (`PRIMARY KEY (shard, order_id_v7)`).
Reorder the columns (if the primary key is already composite): if the primary key is already composite (e.g., (order_id_v7, tenant_id)), reorder it to (tenant_id, order_id_v7). This helps if tenant_id has high cardinality and distributes well. (A very active tenant_id might still need an additional sharding prefix.) RAG in production, how to improve result relevance https://blog.abdellatif.io/production-rag-processing-5m-documents A quick start with LangChain + LlamaIndex: a working prototype, but production results judged "subpar" by users. What improved performance (by ROI): query generation: an LLM creates multiple semantic and keyword queries based on the conversation thread for better coverage. Reranking: the most effective technique; it greatly changes the ranking of chunks. Chunking strategy: requires a lot of effort, understanding of the data, and logical chunks without awkward cut points. Metadata for the LLM: injecting metadata (title, author) improves context and answers. Query routing: detects and handles non-RAG questions (e.g., summarize, who wrote this) via a separate API/LLM. Tooling Building an MCP server (streamable HTTP mode) with Micronaut, with some points of comparison against Quarkus https://glaforge.dev/posts/2025/09/16/creating-a-streamable-http-mcp-server-with-micronaut/ Micronaut now offers official support for the MCP protocol. Example: an MCP server for moon phases (similar to a Quarkus version, for comparison). MCP tools are defined via the @Tool and @ToolArg annotations. Strong point: Micronaut handles input validation automatically (e.g., @NotBlank, @Pattern), eliminating manual error handling. Automatic generation of detailed JSON schemas for input/output structures thanks to @JsonSchema. Requires configuration to expose the generated JSON schemas as static resources. Key dependencies: micronaut-mcp-server-java-sdk and the json-schema modules. Tested with the MCP inspector and integrated with the Gemini CLI tool. Micronaut offers elegant handling of structured inputs/outputs thanks to its rich JSON Schema support. A creative AI agent: how to use the Nano Banana model to generate and edit images (in Java, with ADK) https://glaforge.dev/posts/2025/09/22/creative-ai-agents-with-adk-and-nano-banana/ Language models (LLMs) are becoming multimodal: they handle various inputs (text, images, video, audio). Nano Banana (gemini-2.5-flash-image-preview): a Gemini model that generates and edits images, not just text. ADK (the Agent Development Kit for Java): for configuring creative AI agents that use this kind of model. Application: a base for complex creative workflows (e.g., a marketing agent, chaining agents to generate assets). An older article (6 months old) that illustrates the problems with the YAML file format https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-from-hell (a short demonstration of these pitfalls follows after this section). YAML is extremely complex despite its goal of being human-friendly. A voluminous, versioned specification (YAML 1.1 and 1.2 differ significantly). Unpredictable behaviors and common footguns: sexagesimal numbers (e.g., 22:22 parsed as 1342 in YAML 1.1). Tags (!.git) that can lead to errors or arbitrary code execution. The "Norway problem": no interpreted as false in YAML 1.1.
Non-string keys (on can become the boolean key True). Accidental numbers when values are unquoted (e.g., 10.23 becomes a float). Syntax highlighting is not reliable for catching these subtleties. Templating YAML documents is a bad idea, error-prone and hard to manage. Suggested alternatives: TOML: similar to YAML but safer (strings are always quoted), allows comments. JSON with comments (used by VS Code), but less widespread. Use a simple subset of YAML (hard to enforce). Generate JSON from more powerful programming languages: Nix: excellent for abstraction and configuration reuse. Python: makes it easy to produce JSON with comments and logic. A big mess in the Ruby community, with the influence of big companies and some rather questionable practices https://joel.drapper.me/p/rubygems-takeover/ Methodologies The qualities of good documentation https://leerob.com/docs Speed: very fast page loads (prefer static), optimized images, fonts, and scripts, ultra-fast search (loading and displaying results). Readability: concise, avoid technical jargon; optimized for skimming (bold, italics, lists, headings, images); a simple user experience at first, with progressive complexity; plenty of code examples (copy/paste). Usefulness: document workarounds, make reader feedback easy, automated dead-link checking, learning material with a structured curriculum, migration guides for major changes. AI-friendly: traffic mostly coming from AI crawlers; prefer cURL to "clicks" and prompts to tutorials; an "Ask AI" sidebar referencing the documentation. Agent-ready: make it easy to copy/paste content as Markdown for chatbots; pages viewable as Markdown (e.g., via the URL); an llms.txt file as a directory of Markdown files. Polish: generous click targets (buttons, sidebars); sidebars that keep their scroll position and expanded state; good active/hover states; dynamic OG images; linkable headings/sections with stable anchors; cross-references between guides, API, and examples; meta/canonical tags for clean display in search engines. Localized: no /en by default in the URL; server-side routing for language; localization of static strings and of the content. Responsive: excellent mobile menus / iOS Safari support; tooltips on desktop, popovers on mobile. Accessible: a "skip navigation" link to the main content; alt tags on all images; respect for system reduced-motion settings. Universal: ship documentation "as code" (JSDoc, package); ship via platforms such as Context7, or in node_modules; rule files (e.g., AGENTS.md) alongside the product; evaluations and specific recommended models for the product.
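The YAML 1.1 footguns listed above (the Norway problem, sexagesimal integers, accidental floats) are easy to reproduce from Python, since PyYAML's default resolvers follow the YAML 1.1 rules. A small sketch, assuming PyYAML is installed (pip install pyyaml):

```python
import yaml  # PyYAML, a third-party package

DOC = """
norway: no            # the "Norway problem": bare `no` resolves to a boolean
retry_window: 22:22   # sexagesimal integer: 22 * 60 + 22 = 1342
version: 10.23        # unquoted, so it is parsed as a float, not a string
"""

print(yaml.safe_load(DOC))
# Expected with YAML 1.1 resolution:
#   {'norway': False, 'retry_window': 1342, 'version': 10.23}
# Quoting the values ("no", "22:22", "10.23") keeps them as plain strings.
```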
Law, society, and organizations Microsoft will impose a return-to-office policy https://www.businessinsider.com/microsoft-execs-explain-rto-mandate-in-internal-meeting-2025-9 Microsoft is requiring 3 days per week in the office starting in February 2026, beginning with the Seattle region. CEO Satya Nadella explains that remote work has weakened the social bonds needed for innovation. Executives cite internal data showing that employees present in the office "thrive" more. Microsoft's AI team must be present 4 days per week; the rules are stricter for this strategic division. Employees can request exceptions until September 19, 2025 for complex commutes or the absence of a local team. Amy Coleman (HR) says in-person collaboration improves energy and results, especially in the AI era. The policy will gradually apply to the 228,000 employees worldwide after the United States. Reactions are mixed; some employees criticize the loss of autonomy and inadequate offices. Microsoft is catching up with its tech competitors, which have already imposed stricter returns to the office. The decision comes after 15,000 layoffs in 2025, creating tension with employees. How was Claude Code born? (the story of its creation) https://newsletter.pragmaticengineer.com/p/how-claude-code-is-built Claude Code: an "AI-first" development tool created by Boris Cherny, Sid Bidasaria, and Cat Wu. Impressive performance: $500M in annual revenue, usage up 10x in 3 months. Massive internal adoption: more than 80% of Anthropic engineers use it daily, including the data scientists. Productivity increase: 67% more pull requests (PRs) per engineer despite the team doubling in size. Origin: a simple CLI command that evolved into a tool with file-system access, exploiting the Claude model's "product overhang". Reason for the public launch: to learn about the safety and capabilities of AI models. An "on distribution" tech stack: TypeScript, React (with Ink), Yoga, Bun; chosen because the Claude model is already very good with these technologies. "Claude Code writes 90% of its own code": the model handles most of the development. Lightweight architecture: a simple "shell" around the Claude model, minimizing business logic and code (superfluous code is constantly removed). Local execution: preferred for its simplicity, without virtualization. Security: a granular permission system that asks for confirmation before every potentially dangerous action (e.g., deleting files). Fast development: up to 100 internal releases per day, 1 external release per day, 5 pull requests per engineer per day. Ultra-fast prototyping (e.g., 20+ prototypes of a feature in a few hours) thanks to AI agents. UI/UX innovation: redefines the terminal experience through LLM interaction, with features such as sub-agents, configurable output styles, and a "Learning" mode.
The first public Café IA in Paris https://www.linkedin.com/pulse/my-first-caf%25C3%25A9-ia-paris-room-full-curiosity-an[…]o-goncalves-r9ble/?trackingId=%2FPHKdAimR4ah6Ep0Qbg94w%3D%3D Conferences The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: 30-31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France) 30-31 October 2025: Agile Tour Nantais 2025 - Nantes (France) 30 October-2 November 2025: PyConFR 2025 - Lyon (France) 4-7 November 2025: NewCrafts 2025 - Paris (France) 5-6 November 2025: Tech Show Paris - Paris (France) 5-6 November 2025: Red Hat Summit: Connect Paris 2025 - Paris (France) 6 November 2025: dotAI 2025 - Paris (France) 6 November 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France) 7 November 2025: BDX I/O - Bordeaux (France) 12-14 November 2025: Devoxx Morocco - Marrakech (Morocco) 13 November 2025: DevFest Toulouse - Toulouse (France) 15-16 November 2025: Capitole du Libre - Toulouse (France) 19 November 2025: SREday Paris 2025 Q4 - Paris (France) 19-21 November 2025: Agile Grenoble - Grenoble (France) 20 November 2025: OVHcloud Summit - Paris (France) 21 November 2025: DevFest Paris 2025 - Paris (France) 24 November 2025: Forward Data & AI Conference - Paris (France) 27 November 2025: DevFest Strasbourg 2025 - Strasbourg (France) 28 November 2025: DevFest Lyon - Lyon (France) 1-2 December 2025: Tech Rocks Summit 2025 - Paris (France) 4-5 December 2025: Agile Tour Rennes - Rennes (France) 5 December 2025: DevFest Dijon 2025 - Dijon (France) 9-11 December 2025: APIdays Paris - Paris (France) 9-11 December 2025: Green IO Paris - Paris (France) 10-11 December 2025: Devops REX - Paris (France) 10-11 December 2025: Open Source Experience - Paris (France) 11 December 2025: Normandie.ai 2025 - Rouen (France) 14-17 January 2026: SnowCamp 2026 - Grenoble (France) 29-31 January 2026: Epitech Summit 2026 - Paris - Paris (France) 2-5 February 2026: Epitech Summit 2026 - Moulins - Moulins (France) 2-6 February 2026: Web Days Convention - Aix-en-Provence (France) 3 February 2026: Cloud Native Days France 2026 - Paris (France) 3-4 February 2026: Epitech Summit 2026 - Lille - Lille (France) 3-4 February 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France) 3-4 February 2026: Epitech Summit 2026 - Nancy - Nancy (France) 3-4 February 2026: Epitech Summit 2026 - Nantes - Nantes (France) 3-4 February 2026: Epitech Summit 2026 - Marseille - Marseille (France) 3-4 February 2026: Epitech Summit 2026 - Rennes - Rennes (France) 3-4 February 2026: Epitech Summit 2026 - Montpellier - Montpellier (France) 3-4 February 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France) 3-4 February 2026: Epitech Summit 2026 - Toulouse - Toulouse (France) 4-5 February 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France) 4-5 February 2026: Epitech Summit 2026 - Lyon - Lyon (France) 4-6 February 2026: Epitech Summit 2026 - Nice - Nice (France) 12-13 February 2026: Touraine Tech #26 - Tours (France) 26-27 March 2026: SymfonyLive Paris 2026 - Paris (France) 31 March 2026: ParisTestConf - Paris (France) 16-17 April 2026: MiXiT 2026 - Lyon (France) 22-24 April 2026: Devoxx France 2026 - Paris (France) 23-25 April 2026: Devoxx Greece - Athens (Greece) 6-7 May 2026: Devoxx UK 2026 - London (UK) 22 May 2026: AFUP Day 2026 Lille - Lille (France) 22 May 2026: AFUP Day 2026 Paris - Paris (France) 22 May 2026: AFUP Day 2026 Bordeaux - Bordeaux (France) 22 May 2026: AFUP Day 2026 Lyon - Lyon (France) 17 June 2026: Devoxx Poland - Krakow (Poland) 4 September 2026: JUG Summer Camp 2026 - La Rochelle (France) 17-18 September 2026: API Platform Conference 2026 - Lille (France) 5-9 October 2026: Devoxx Belgium - Antwerp (Belgium) Contact us To react to this episode, come and chat in the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Record a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
In this insightful episode, explore why everyone is becoming an AI operator and why mastering prompt engineering is the key differentiator in the modern workforce. Learn how to effectively communicate with AI to get the results you want—a skill that takes just 10 minutes of proper setup versus 50 minutes of back-and-forth fixes. The conversation also delves into global marketing challenges across different cultures and geographical regions, examining real examples from Sweden to the UK to show how cultural nuances still matter despite the supposed cultural flattening from media. A major part of this discussion focuses on demystifying "revolutionary" technology buzzwords. Discover how cloud computing, APIs, and AI integration aren't actually new—they're just rebranded existing concepts with fancy names. From AWS RDS being just hosted MySQL to MCP being REST API calls, learn to see through the marketing terminology and recognize the simplicity beneath. The episode concludes with thought-provoking discussions about AI's potential for real-world task automation, existential risks, and why treating AI like a good team member with proper context and instructions leads to superior results. Perfect for: Content creators, social media managers, and anyone looking to understand modern technology without the jargon. Try Vista Social for FREE today Book a Demo Follow us on Instagram Follow us on LinkedIn Follow us on Youtube
This week Sebastian Trzcinski-Clément from Canonical joins Noah and Steve to talk about Ubuntu Summit 25.10. -- During The Show -- 00:50 Intro Steve's TV Falling asleep to TV LG OLED 55A1 06:05 Sebastian Trzcinski-Clément What was out there Ubuntu CDs Working for Google Promotion Committees Putting on a Conference Abdula Developer Library Google to Canonical Ubuntu Summit Dreamworks MoonRay Microsoft & WSL Thinking Fast and Slow Reach out to people The Canonical Office 39:32 News Wire Peazip 10.7 - peazip.github.io (https://peazip.github.io/changelog.html) Calibre 8.13 - calibre-ebook.com (https://calibre-ebook.com/whats-new) Squid Proxy 7.2 - github.com (https://github.com/squid-cache/squid/releases) Less 685 - greenwoodsoftware.com (https://www.greenwoodsoftware.com/less) Digikam 8.8 - digikam.org (https://www.digikam.org/news/2025-10-19-8.8.0_release_announcement) Gnome 49.1 - gnome.org (https://discourse.gnome.org/t/gnome-49-1-released/31977) Thunderbird 144 - thunderbird.net (https://www.thunderbird.net/en-US/thunderbird/144.0/releasenotes) Firefox 144 - firefox.com (https://www.firefox.com/en-US/firefox/144.0/releasenotes) Zorin OS 18 - zorin.com (https://blog.zorin.com/2025/10/14/zorin-os-18-has-arrived) Tails 7.1 - torproject.org (https://blog.torproject.org/new-release-tails-7_1) Q3 Malware Index - eweek.com (https://www.eweek.com/news/open-source-malware-2025) LinkPro Root Kit - thehackernews.com (https://thehackernews.com/2025/10/linkpro-linux-rootkit-uses-ebpf-to-hide.html) Operation Zero Disco - cyberpress.org (https://cyberpress.org/cisco-snmp-vulnerability-exploited) F5 Hacked - thehackernews.com (https://thehackernews.com/2025/10/f5-breach-exposes-big-ip-source-code.html) EdenSpark - gamingonlinux.com (https://www.gamingonlinux.com/2025/10/gaijin-announced-edenspark-an-open-source-ai-assisted-platform-for-making-games) Project CodeGuard - siliconangle.com (https://siliconangle.com/2025/10/16/cisco-unveils-project-codeguard-open-source-framework-secure-ai-written-software) Coral in SL2610 SoCs - cnx-software.com (https://www.cnx-software.com/2025/10/17/google-open-source-coral-npu-synaptics-sl2610-edge-ai-socs) AI and Cerebral Palsy - globenewswire.com (https://www.globenewswire.com/news-release/2025/10/16/3168076/0/en/Open-Source-AI-Tool-by-Yandex-Detects-Signs-of-Infant-Cerebral-Palsy-With-Over-90-Accuracy.html) Mysql 9.5.0 - mysql.com (https://dev.mysql.com/doc/relnotes/mysql/9.5/en/news-9-5-0.html) SuperTuxKart 1.5 - supertuxkart.net (https://supertuxkart.net/Main_Page.html) KDE Plasma 6.5 - kde.org (https://kde.org/announcements/plasma/6/6.5.0) Fedora 43 - phoronix.com (https://www.phoronix.com/news/Fedora-43-Release-Day) Zorin saw 300,000 Downloads - reddit.com (https://www.reddit.com/r/linux/comments/1oih2hv/zorin_os_18_has_already_hit_over_300000_downloads) Qilin Malware - trendmicro.com (https://www.trendmicro.com/en_us/research/25/j/agenda-ransomware-deploys-linux-variant-on-windows-systems.html) Lightricks LTX-2 - prnewswire.com (https://www.prnewswire.com/news-releases/lightricks-releases-ltx-2-the-first-complete-open-source-ai-video-foundation-model-302593012.html) MiniMax-M2 - venturebeat.com (https://venturebeat.com/ai/minimax-m2-is-the-new-king-of-open-source-llms-especially-for-agentic-tool) IBM Mellea - app.daily.dev (https://app.daily.dev/posts/ibm-s-mellea-tackles-open-source-ai-s-hidden-weakness-xqmwbepaq) 90% of Games Run on Linux - tomshardware.com 
(https://www.tomshardware.com/software/linux/nearly-90-percent-of-windows-games-now-run-on-linux-latest-data-shows-as-windows-10-dies-gaming-on-linux-is-more-viable-than-ever) boilingsteam.com (https://boilingsteam.com/windows-games-compatibility-on-linux-is-at-a-all-time-high) 42:55 F5 Breach Claims Nation State Level Attack Effect of Breach Disclosure laws If you can get the info off your system, do so F5 Competitors MetalLB (https://github.com/metallb/metallb) HA Proxy (https://www.haproxy.org/) theregister.com (https://www.theregister.com/2025/10/15/highly_sophisticated_government_hackers_breached/) 51:05 Proxmox Backup Server DHCP? - David Tiny From the community Proxmox Virtual Enviornment Proxmox Backup Server Use the tailscale IP or DNS Name Proxmox docs (https://forum.proxmox.com/threads/changing-network-to-manage-backups.104771/) -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/464) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed) Special Guest: Sebastian Trzcinski-Clément.
AWS Morning Brief for the week of October 27th, with Corey Quinn. Links:Streamline in-place application upgrades with Amazon VPC LatticeBuild a proactive AI cost management system for Amazon Bedrock – Part 2 -Overview and best practices of multithreaded replication in Amazon RDS for MySQL, Amazon RDS for MariaDB, and Amazon Aurora MySQL AWS announces Nitro Enclaves are now available in all AWS RegionsAmazon CloudWatch Synthetics now supports bundled multi-check canaries Amazon U7i instances now available in Europe (London) RegionAmazon Connect now supports automated follow-up evaluations triggered by initial evaluation resultsHow the Wildlife Conservation Society uses AWS to accelerate coral reef monitoring worldwideAmazon MQ is now available in AWS Asia Pacific (New Zealand) Region Amazon CloudWatch introduces interactive incident reportingAWS Secret-West Region is now availableCharting the life of an Amazon CloudFront request
In this episode of "atareao con Linux", we tackle a common frustration: the overload of complexity in the blogging world. If you've tried WordPress and grown tired of managing plugins, themes, and vulnerabilities, or if Static Site Generator (SSG) solutions feel like overkill for simply publishing notes and code, Noet is the solution you've been looking for. Noet is an open-source blogging platform with a clear philosophy: put writing first. Its design is about removing everything that stands between you and publishing your content. It is, essentially, an advanced text editor that stores posts in a database and serves them as a clean, readable website. The real magic of Noet lies in its technical simplicity, which makes it a great fit for our Linux environment (a VPS, a Raspberry Pi, or your local server): Single binary (Go): the entire backend compiles into a single executable (written in Go), which makes deployment and maintenance on any Linux platform extremely easy. SQLite for data management: instead of depending on external databases such as MySQL or PostgreSQL, Noet uses SQLite. This means all your posts and settings are stored in a single file, noet.db. This is key for efficient data management and makes backups incredibly simple. Deployment with Docker: true to our practical style, we walk through the docker-compose.yaml file needed to get Noet running in a matter of minutes. If you already use Docker for services such as Traefik, Syncthing, or your databases, adding Noet to your stack is trivial. For the technical writer or the Linux power user, Noet shines in its editor: Native Markdown support: use the syntax you already know. Code and LaTeX: the editor supports syntax highlighting for code blocks and lets you embed mathematical equations with LaTeX/KaTeX. It is ideal for documenting your projects or publishing advanced tutorials. Auto-save: never lose a line of what you write. Simple image handling: drag and drop to upload images and adjust their size with a click. If you want to improve your productivity, simplify your infrastructure, and have a blog that feels as light and modern as Neovim or Obsidian but is ready to publish on the web, you have to try Noet. Listen to the episode for all the commands, the docker-compose file, and the best usage tips. More information and links in the episode notes
An airhacks.fm conversation with Philipp Page (@PagePhilipp) about: early computing experiences with Windows XP and Intel Pentium systems, playing rally car games like Dirt with split-screen multiplayer, transitioning from gaming to server administration through Minecraft, running Minecraft servers at age 13 with memory limitations and out-of-memory exceptions, implementing caching mechanisms with cron jobs and MySQL databases, learning about SQL injection attacks and prepared statements, discovering connection pooling advantages over PHP approaches, appreciating type safety and Object-oriented programming principles in Java, the tendency to over-abstract and create unnecessary abstractions as junior developers, obsession with avoiding dependencies and implementing frameworks from scratch, building custom Model-View-Controller patterns and dependency injection systems, developing e-learning platform for aerospace industry using PHP Symfony framework, implementing time series forecasting in pure Java without external dependencies, internship and employment at AWS Dublin in Frontier Networking team, working on AWS Outposts and Ground Station hybrid cloud offerings, using python and rust for networking control plane development, learning to appreciate Python despite initial resistance to dynamically typed languages, joining AWS Lambda Powertools team as Java tech lead, maintaining open-source serverless development toolkit, providing utilities for observability including structured JSON logging with Lambda-specific information, implementing metrics and tracing for distributed event-driven architectures, mapping utilities to AWS Well-Architected Framework serverless lens recommendations, caching parameters and secrets to improve scalability and reduce costs, debate about AspectJ dependency and alternatives like Micronaut and quarkus approaches, providing both annotation-based and programmatic interfaces for utilities, newer utilities like Kafka consumer avoiding AspectJ dependency, comparing Micronaut's compiler-based approach and Quarkus extensions for bytecode generation, AspectJ losing popularity in enterprise Java projects, preferring Java standards over external dependencies for long-term maintainability, agents in electricity trading simulations for renewable energy scenarios, comparing on-premise Java capabilities versus cloud-native AWS features, default architecture pattern of Lambda with S3 for persistent storage, using AWS Calculator for cost analysis before architecture decisions, event-driven architectures being native to AWS versus artificially created in traditional Java projects, everything in AWS emitting events naturally through services like EventBridge, filtering events rather than creating them artificially, avoiding unnecessary microservices complexity when simple method calls suffice, directly wiring API Gateway to DynamoDB without Lambda for no-code solutions, using Java for CDK infrastructure as code while minimizing runtime dependencies, maximizing cloud-native features when in cloud versus on-premise optimization strategies, starting with simplest possible architecture and justifying complexity, blue-green deployments and load balancing handled automatically by Lambda, internal AWS teams using Lambda for orchestration and event interception, Lambda as foundational zero-level service across AWS infrastructure, preferring highest abstraction level services like Lambda and ECS Fargate, only dropping to EC2 when specific requirements demand lower-level control, 
contributing to Powertools for AWS Lambda Python repository before joining team, compile-time weaving avoiding Lambda cold start performance impacts, GraalVM compilation considerations for Quarkus and Micronaut approaches, customer references available on Powertools website, contrast between low-level networking and serverless development, LinkedIn as primary social media platform for professional connections, Powertools for AWS Lambda (Java) Philipp Page on twitter: @PagePhilipp
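One of the formative lessons mentioned in this episode, SQL injection versus prepared statements, is easy to demonstrate. A minimal sketch using Python's standard-library sqlite3 driver (the same parameter-binding idea applies to MySQL drivers, which typically use %s placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

payload = "alice' OR '1'='1"  # classic injection attempt

# Unsafe: string concatenation lets the payload rewrite the WHERE clause.
unsafe = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: the ? placeholder binds the payload as one opaque string value.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe)  # every row comes back: the injection succeeded
print(safe)    # []: nobody is literally named "alice' OR '1'='1"
```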
AWS Morning Brief for the week of October 20th, with Corey Quinn. Links:Amazon Location Service Introduces New Map Styling Features for Enhanced CustomizationAWS Resource Explorer launches immediate resource discovery within a Region AWS SAM CLI adds Finch support, expanding local development tool options for serverless applicationsSimplified model access in Amazon BedrockAmazon EC2 now supports CPU options optimization for license-included instancesIntroducing Amazon EBS Volume Clones: Create instant copies of your EBS volumesOptimizing document AI and structured outputs by fine-tuning Amazon Nova Models and on-demand inferenceIntroducing URL and host header rewrite with AWS Application Load BalancersNew Amazon EKS Auto Mode features for enhanced security, network control, and performanceMonitor, analyze, and manage capacity usage from a single interface with Amazon EC2 Capacity ManagerPerformance optimization strategies for MySQL on Amazon RDSAWS re:Invent 2025: Reimagining customer experience with Amazon AWS Deprecates Two Dozen Services (Most of Which You've Never Heard Of)A FinOps Guide to Comparing Containers and Serverless Functions for ComputeAnnouncing vector search for Amazon ElastiCache
Back in May, the Remix cofounders revealed they were reimagining Remix v3 from the ground up, and this past week at Remix Jam, they gave a sneak peek of it. It's fair to say this new framework shouldn't be called Remix at all because it's departed so far from its origins: devs manually update state, it uses signals, routes are defined in a TS doc, and it will ship with a component library, for starters. Will it catch on, who knows?Not to be outdone by React v19.2 last week, Next.js 16 beta debuted (with support for React 19.2 included). In addition to the latest version of React, Next.js 16 has also declared Turbopack, RSC support, and React Compiler all stable, and improved its caching system as well.And Bun is back in the news with the release of Bun 1.3, and it's a doozy of a minor version release. Bun wants to be a full-stack JavaScript runtime as it now includes a full-stack dev server, built in support for MySQL and Redis DBs, routing, and the ability to package an entire project into one executable for cross-platform support. Well done, Bun team!Chapter Markers:01:14 - Remix v310:38 - Next.js 16 beta17:35 - Bun 1.324:42 - Firefox 144 released w/view transition support25:19 - HBO changes TV channel names28:00 - W3C has a new logo31:25 - What's making us happyNews:Paige - Bun 1.3Jack - Remix v3TJ - Next.js 16 betaLightning News:Firefox 144 released w/view transition supportW3C has a new logo and the Gavin Belson signature from Silicon Valley HBO changes TV channel namesWhat Makes Us Happy this Week:Paige - The Gilded Age TV seriesJack - KPop Demon HuntersTJ - Madison, WIThanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or talk to us on X, Bluesky, or YouTube.Front-end Fire websiteBlue Collar Coder on YouTubeBlue Collar Coder on DiscordReach out via emailTweet at us on X @front_end_fireFollow us on Bluesky @front-end-fire.com
This show has been flagged as Explicit by the host.
Part I - Lee talks about:
Cyber - Capture the flag, providing OAuth, Secure design and static typing
Databases - SQL Server, MySQL and SQLite
Test Frameworks
Generative AI for coding
Hardware (as in IoT, not as in computers)
Part II - A ramble about neurodivergence
In academia and work
Accommodation vs Encouraging work styles that fit the task
Remote working
Unusual career paths
Technical communication
Some personal code projects
Url to Markdown Konsole extension
Epub in a terminal
Markdown table generator
MySQL output formatter
Resources of note
Report on Changing the Workplace (2022) - about disability and remote working
Model Context Protocol - A way to give AI chat bots access to software systems to increase their relevant knowledge and abilities
Secure by Design book
No chatbots were harmed in the making of this episode. Provide feedback on this episode.
Michael and Jake open with retro arcade serendipity (a Mortal Kombat cabinet sighting!) and tumble into family bowling, kid-approved card games, and why tactile gadgets are back in style. Then they pivot hard into dev-mode: shadcn/ui (and shadcn-vue), Inertia, React-ish forms, and the age-old tradeoff between "batteries-included" simplicity and modern real-time UX.
Highlights:
Mortal Kombat cabinet & mini arcades, gift ideas for Laracon AU
Duckpin bowling explainer and family bowling stories (plus UNO, Yahtzee, Taco Cat Goat Cheese Pizza)
The "analog is cool again" thread: mechanical keyboards, a Keychron board, and a retro 3D-printed mouse shell for a Logitech M185
Dev deep-dive: shadcn docs, Inertia forms, partial reloads vs full refresh, Livewire/Alpine, and real-time updates with Pusher/Reverb
Show links
RetroPie / Arcade1Up
Laracon AU
Duckpin bowling
Keychron keyboard
3D-printed retro mouse shell for Logitech M185
Taco Cat Goat Cheese Pizza
Inertia.js
shadcn/ui
shadcn-vue
Livewire
Alpine.js
Pusher
Laravel Reverb
Axios
fetch
"Escolha uma área e fique ali. É o tempo que vai dar espaço para a multidisciplinariedade. Tente criar algo com aquilo" - Felipe Nunes No sexto episódio do Hipsters.Talks, PAULO SILVEIRA , CVO do Grupo Alun, conversa com FELIPE NUNES, senior sales engineer da NEO4J, sobre bancos de dados de grafos e como eles estão revolucionando a forma de trabalhar com dados. Uma conversa sobre como os grafos democratizam o acesso aos dados e potencializam a inteligência artificial. Prepare-se para um episódio cheio de conhecimento e inspiração! Espero que aproveitem :) Sinta-se à vontade para compartilhar suas perguntas e comentários. Vamos adorar conversar com vocês!
Jake and Michael dive into a wide range of topics, from coding practices in Laravel to the evolving role of AI in software development. They kick things off with daylight savings and weekend updates before moving into technical discussions on authorization, policies, and form requests in Laravel. The conversation expands to cover recent changes in middleware and controller patterns, contextual attributes in the service container, and practical approaches to request validation. Later, the focus shifts toward AI tools like Claude, Grok, and Cursor, including their strengths, frustrations, and industry-wide adoption pressures. We reflect on the uneasy balance between developer control and AI assistance, wrapping up with thoughts on productivity, value, and what it means to let machines write code.
Show links
Lawn Hub
Arcade 1Up
RetroPie
Mortal Kombat cabinet
Nuno's authorization on form requests
Contextual Attributes
Grok Code Fast 1
Sam Lambert, my former boss at PlanetScale, talks to me about PlanetScale moving from a MySQL company to now also having a Postgres offering. Sam shares why PlanetScale decided to move to Postgres, how MySQL and Postgres are different at a technical level, and how the change has impacted the company culture. Stay to the end for a special surprise!
PlanetScale Metal Episode: https://youtu.be/3r9PsVwGkg4
Join the waitlist to be notified of the MySQL for Developers release on Database School: https://databaseschool.com/mysql
Follow Sam:
PlanetScale: https://planetscale.com
Twitter: https://twitter.com/isamlambert
Follow Aaron:
Twitter: https://twitter.com/aarondfrancis
LinkedIn: https://www.linkedin.com/in/aarondfrancis
Website: https://aaronfrancis.com - find articles, podcasts, courses, and more.
Database School: https://databaseschool.com
Database School YouTube Channel: https://www.youtube.com/@UCT3XN4RtcFhmrWl8tf_o49g (Subscribe today)
Chapters:
00:00 - Inaugural episode on this channel
01:46 - Introducing Sam Lambert and his background
03:04 - How PlanetScale built on MySQL and Vitess
06:10 - Explaining the layers of PlanetScale's architecture
09:57 - Node lifecycles, failover, and operational discipline
12:02 - How Vitess makes sharding work
14:21 - PlanetScale's edge network and resharding
19:02 - Why downtime is unacceptable at scale
20:04 - From Metal to Postgres: the decision process
23:06 - Why Postgres vibes matter for startups
27:04 - How PlanetScale adapted its stack for Postgres
34:38 - Entering the Postgres ecosystem and extensions
41:02 - Permissions, security, and reliability trade-offs
45:04 - Building Neki: a Vitess-style system for Postgres
53:33 - Why PlanetScale insists on control for reliability
1:02:05 - Competing in the broader Postgres landscape
1:08:33 - Why PlanetScale stays "just a database"
1:12:33 - What GA means for Postgres at PlanetScale
1:17:43 - Call to action for new Postgres users
1:18:49 - Surprise!
1:22:21 - Wrap-up and where to find Sam
In this episode, Michael and Jake catch up on life and code. They talk about fatigue, seasonal shifts, lawn adventures, and the return of hay fever. We dive into replacing a legacy Salesforce integration with Saloon, frustrations with mocks, and how Saloon fakes have improved testing workflows. Michael walks through his experiments with AI tools like Claude and opencode to prototype fake gateways - treating AI as a "junior dev" pair. The discussion covers gateway patterns, middleware, registry-based response handling, and strategies for testing Salesforce without polluting production environments. From weeds and soil temps to software fakes and AI-driven dev, this one's a mix of everyday life and practical engineering insights.
Show links
LawnHub – Michael's lawn care supplier
Saloon (by Sam Carré) – Laravel/HTTP client package
Salesforce – CRM platform discussed in the episode
Mockery – PHP mocking framework
opencode – terminal tool for AI coding (by SST's Dax and Adam, Terminal Coffee)
Claude – AI model used for coding exploration
GitHub Copilot – AI coding assistant
Stripe test cards – referenced in gateway fake analogy
Tony Cardella is a seasoned software engineer based in Houston, Texas. With a robust background in enterprise development, Tony brings deep expertise in the .NET Framework (C#), Python, and cloud platforms including Microsoft Azure and Amazon Web Services. His technical repertoire spans both relational databases — such as SQL Server, MySQL, and PostgreSQL — and NoSQL solutions like Azure Cosmos DB. Tony is a strong advocate for developer productivity tools, frequently leveraging JetBrains products including ReSharper, DataGrip, PyCharm, and Rider, as well as Visual Studio. Outside the world of code, Tony is equally passionate about strength training, whether he's lifting weights himself or coaching others in the discipline. Topics of Discussion: [1:34] Tony shares his career journey, starting with a consulting company that reached out to him while he was job hunting. [3:17] NCrunch is an automated testing tool that runs unit tests continuously, focusing on impacted tests. [5:08] Challenges and benefits of NCrunch, and why would we need to use it? [7:44] Tony shares his approach to unit testing, focusing on covering 80% of the code with minimal effort and addressing the remaining 20% as needed. [8:51] The importance of not over-investing in unit tests that may not provide significant value. [11:47] Tony explains how Ncrunch provides code coverage metrics and visual indicators of covered and uncovered code. [12:59] The tool's ability to show exactly where unit tests are failing, without needing to dive into stack traces. [13:51] Distributed processing and integration tests. [27:44] The challenges of running integration tests with external dependencies, such as databases. [29:18] Exploratory testing and code quality. [32:34] Tony emphasizes the value of unit tests in codifying tribal knowledge and ensuring code quality. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Tony Cardella Lightning Talks! The Code Gorilla Survey: Fixing Bugs Stealing Time from Development NCrunch Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
In this episode, Jake and Michael catch up on life, family, and tech. Michael shares proud stories about his son Eli turning into a "soccer terrorist" on the field, while Jake recounts his own stint as a stand-in soccer coach. They dive into Laracon AU updates — from speaker announcements and Road to Laracon podcasts, to quiz night and swag planning. Other highlights include experiments with AI-generated artwork, Bruce's new social media adventures, sponsor promotion, and even a tangent on coding tools like PHPStan and how AI can help fix issues in the background.
Show links
Laracon AU
Road to Laracon
Bruce on X
Laravel Live Denmark
Boost
Matt Hamann knew he was going to be in tech way back in his younger days. His Dad worked for IBM, so there were always fun things to talk about and play with. He got his first family computer when he was 4 years old, and started programming BASIC when he was 8. Eventually, they got dialup through AOL - and he took off building websites with PHP & MySQL. Outside of tech, he is married with 3 kids. He loves to travel and spend time with his family. He also plays several instruments, including the piano and pipe organ, and enjoys tinkering with smart home devices.
Right around the time of the pandemic, Matt and his co-founder were pitching a new company idea in Y Combinator, around data privacy. After receiving the feedback that there wasn't a big market for the original idea, they started to jam on ideas on how to pivot - and quickly landed on how cool it would be to have password-less authentication. This is the creation story of Rownd.
Sponsors
Paddle.com
Sema Software
PropelAuth
Postman
Meilisearch
Mailtrap
.TECH Domains (https://get.tech/codestory)
Links
https://rownd.com/
https://www.linkedin.com/in/matthamann/
Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
MariaDB is a name with deep roots in the open-source database world, but in 2025 it is showing the energy and ambition of a company on the rise. Taken private in 2022 and backed by K1 Investment Management, MariaDB is doubling down on innovation while positioning itself as a strong alternative to MySQL and Oracle. At a time when many organisations are frustrated with Oracle's pricing and MySQL's cloud-first pivot, MariaDB is finding new opportunities by combining open-source freedom with enterprise-grade reliability.
In this conversation, I sit down with Vikas Mathur, Chief Product Officer at MariaDB, to explore how the company is capitalising on these market shifts. Vikas shares the thinking behind MariaDB's renewed focus, explains how the platform delivers similar features to Oracle at up to 80 percent lower total cost of ownership, and details how recent innovations are opening the door to new workloads and use cases.
One of the most significant developments is the launch of Vector Search in January 2023. This feature is built directly into InnoDB, eliminating the need for separate vector databases and delivering two to three times the performance of pgvector. With hardware acceleration on both x86 and IBM Power architectures, and native connectors for leading AI frameworks such as LlamaIndex, LangChain and Spring AI, MariaDB is making it easier for developers to integrate AI capabilities without complex custom work.
Vikas explains how MariaDB's pluggable storage engine architecture allows users to match the right engine to the right workload. InnoDB handles balanced transactional workloads, MyRocks is optimised for heavy writes, ColumnStore supports analytical queries, and Mroonga enables full-text search. With native JSON support and more than forty functions for manipulating semi-structured data, MariaDB can also remove the need for separate document databases. This flexibility underpins the company's vision of one database for infinite possibilities.
The discussion also examines how MariaDB manages the balance between its open-source community and enterprise customers. Community adoption provides early feedback on new features and helps drive rapid improvement, while enterprise customers benefit from production support, advanced security, high availability and disaster recovery capabilities such as Galera-based synchronous replication and the MaxScale proxy.
We look ahead to how MariaDB plans to expand its managed cloud services, including DBaaS and serverless options, and how the company is working on a "RAG in a box" approach to simplify retrieval-augmented generation for DBAs. Vikas also shares his perspective on market trends, from the shift away from embedded AI and traditional machine learning features toward LLM-powered applications, to the growing number of companies moving from NoSQL back to SQL for scalability and long-term maintainability.
This is a deep dive into the strategy, technology and market forces shaping MariaDB's next chapter. It will be of interest to database architects, AI engineers, and technology leaders looking for insight into how an open-source veteran is reinventing itself for the AI era while challenging the biggest names in the industry.
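To make the storage-engine and JSON points concrete, here is a minimal sketch (not taken from the interview) of how a developer might exercise both from Python. It assumes a local MariaDB server with the ColumnStore plugin installed and the MariaDB Connector/Python package; the database, table, and credential values are hypothetical.

```python
# Minimal sketch: per-table storage engines plus JSON functions in MariaDB.
# Assumes a local server with the ColumnStore plugin and the `mariadb`
# (MariaDB Connector/Python) package; all names and credentials are made up.
import mariadb

conn = mariadb.connect(user="app", password="secret",
                       host="127.0.0.1", database="demo")
cur = conn.cursor()

# Match the engine to the workload: transactional rows in InnoDB,
# append-heavy analytical history in ColumnStore.
cur.execute("CREATE TABLE IF NOT EXISTS orders "
            "(id INT PRIMARY KEY, doc JSON) ENGINE=InnoDB")
cur.execute("CREATE TABLE IF NOT EXISTS orders_history "
            "(id INT, total DECIMAL(10,2)) ENGINE=ColumnStore")

# Semi-structured data without a separate document database.
cur.execute("INSERT INTO orders VALUES (?, ?)",
            (1, '{"customer": {"country": "FI"}, "total": 42.5}'))
cur.execute("SELECT JSON_VALUE(doc, '$.customer.country'), "
            "JSON_VALUE(doc, '$.total') FROM orders")
print(cur.fetchall())

conn.commit()
conn.close()
```

The ENGINE clause is what selects the storage engine per table; ColumnStore imposes its own restrictions (no secondary indexes, for example), so treat this as an illustration of the idea rather than a drop-in pattern.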
In this episode, Michael and Jake reflect on their recent time at Laracon US 2025 in Denver - catching up in person after six years, reconnecting with the Laravel community, and sharing behind-the-scenes stories from the conference floor.They also cover:Why this Laracon felt like a true “homecoming”Building Laravel meetups and fostering communityThe book (and tv show) Station Eleven (and how different things might have been)The value of attending conferences, particularly as a non-speakerContinued discussion on the complexities of handling roles and permissionsThe episode weaves together community highlights, technical challenges, and personal reflections.
In this episode, I speak with Peter Zaitsev of Percona about the history of MySQL, his history with the venerable database, and his history with Percona.Try the best git GUI for macOS and WindowsGrapple git without the grief and try Tower, the best graphical interface for git on macOS and Windows.https://go.chrischinchilla.com/tower For show notes and an interactive transcript, visit chrischinchilla.com/podcast/To reach out and say hello, visit chrischinchilla.com/contact/To support the show for ad-free listening and extra content, visit chrischinchilla.com/support/
In this episode, Michael and Jake kick things off with some Laracon travel talk, sharing their hotel plans, coffee quests, and even jokes about pillow fights at the conference hotel. Michael reveals his precise coffee scouting for the Vib by Best Western hotel, determined not to survive three days on Starbucks alone.
Should you define middleware in a controller's constructor? Michael explains why he avoids it - preferring to keep all middleware in route definitions for better visibility and maintainability. Jake explores the pros and cons and why he's still tempted to use it for certain edge cases.
Dynamic permissions vs. static definitions: We switch gears to talk about the balance between flexibility and clarity when defining permissions for applications, especially when it comes to handling user roles, teams, and complex business rules.
Mentioned in this episode:
Laracon US travel plans
Vib by Best Western (the hotel coffee and tacos!)
Laravel middleware usage
Permission handling in apps
Travel gear for developers on the go
In this episode, Jake and Michael discuss the nuance of being “busy”, saying no to features (and why), handling user feedback early, Laravel-powered static views with dynamic data, and building tools that stand the test of time.
In this episode, Jake and Michael reflect on parenting, discuss Apple's new Liquid Glass UI, finding smarter ways to use video on the web, plus share thoughts on AI overload, Laracon prep, and why Wistia might be your next favourite video tool.In this episode:- Apple's Liquid Glass UI- Kit.com and Wistia for video- Reflections on AI, tech bubbles, and accessibility- Laracon US and vox pop interviews- The emotional ride of watching your kids grow up
Welcome to episode 306 of The Cloud Pod – where the forecast is always cloudy! This week, we have a bunch of announcements concerning the newest offering from Anthropic – Claude Sonnet 4 and Opus 4, plus container security, Azure MySQL Maintenance, Vertex AI, and Mistral AI. Plus, we've got a Cloud Journey installment AND an aftershow – so get comfy and get ready for a trip to the clouds!
Titles we almost went with this week:
ECS Failures Now Have 4x the Excuses
Nailing Down Your Container Security, One Patch at a Time
HashiCorp's New Recipe: Terraform, AI, and a Pinch of MCP
Teaching an Old DNS New IPv6 Tricks
Dash-ing through the Klusters, in an AWS Console
Google's Generative AI Playground Gets a Glow-Up
Vertex AI Studio: Now with 200% More Darkness! Like our souls
Claude Opus 4 Strikes a Chord on Google Cloud
Sovereign-teed to Please: Google Cloud's Royal Treatment
Google's Cloud Kingdom Expands its Borders
Shall I Compare Thee to a Summer's AI? Anthropic Drops Sonne(t) 4 Knowledge on Vertex
Mistral AI Chats Up a Storm on Google Cloud
Google Cloud's Vertex AI Gets a Dose of Mistral Magic
.NET Aspire on Azure: The App Service Strikes Back
Default Outbound Access Retires, Decides Florida Isn't for Everyone
AI Is Going Great – or How ML Makes Money
01:52 Introducing Claude 4
Anthropic has launched its latest models, Claude Opus 4 and Claude Sonnet 4, setting new standards for coding, advanced reasoning, and AI agents. Maybe they'll actually follow instructions when told to shut down? (Looking at you, ChatGPT.) Claude Opus 4 is "the world's best coding model" with sustained performance on complex, long-running tasks and agent workflows. Opus 4 has 350 billion parameters, making it one of the largest publicly available language models. It demonstrates strong performance on academic benchmarks, including research. Sonnet 4 is a smaller 10 billion parameter model optimized for dialogue, making it well-suited for conversational AI applications.
Alongside the models, they are also announcing:
Extended thinking with tool use (beta): Both models can use tools – like web search – during extended thinking, allowing Claude to alternate between reasoning and tool use to improve its responses.
New model capabilities: Both models can use tools in parallel, follow instructions more precisely, and, when given access to local files by developers, demonstrate significantly improved memory capabilities, extracting and saving key facts to maintain continuity and build tacit knowledge over time.
Claude Code is now generally available: After receiving extensive positive feedback during the research preview, Anthropic is expanding how developers can collaborate with Claude. Claude Code now supports background tasks via GitHub Actions.
In this episode, Jake and Michael discuss Jake's new stealth grill, his eldest son's takeover of the state finals (and metric's takeover of measurement), and Michael goes through the process of refining over 150 talk submissions down to the final Laracon AU schedule.
In this episode, Jake and Michael discuss using interfaces as a dictionary of constants, working with and testing inputs passed down multiple layers of the application, and refactoring legacy code with PHP's ArrayAccess interface.
In this episode, Jake and Michael discuss the ramp up of Laracon AU planning, touch base on Jake's unorthodox usage of Laravel Horizon, and Michael finally coming around to using AI.
Kevin Weil is the chief product officer at OpenAI, where he oversees the development of ChatGPT, enterprise products, and the OpenAI API. Prior to OpenAI, Kevin was head of product at Twitter, Instagram, and Planet, and was instrumental in the development of the Libra (later Novi) cryptocurrency project at Facebook.In this episode, you'll learn:1. How OpenAI structures its product teams and maintains agility while developing cutting-edge AI2. The power of model ensembles—using multiple specialized models together like a company of humans with different skills3. Why writing effective evals (AI evaluation tests) is becoming a critical skill for product managers4. The surprisingly enduring value of chat as an interface for AI, despite predictions of its obsolescence5. How “vibe coding” is changing how companies operate6. What OpenAI looks for when hiring product managers (hint: high agency and comfort with ambiguity)7. “Model maximalism” and why today's AI is the worst you'll ever use again8. Practical prompting techniques that improve AI interactions, including example-based prompting—Brought to you by:• Eppo—Run reliable, impactful experiments• Persona—A global leader in digital identity verification• OneSchema—Import CSV data 10x faster—Where to find Kevin Weil:• X: https://x.com/kevinweil• LinkedIn: https://www.linkedin.com/in/kevinweil/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Kevin's background(04:06) OpenAI's new image model(06:52) The role of chief product officer at OpenAI(10:18) His recruitment story and joining OpenAI(17:20) The importance of evals in AI(24:59) Shipping quickly and consistently(28:34) Product reviews and iterative deployment(39:35) Chat as an interface for AI(43:59) Collaboration between researchers and product teams(46:41) Hiring product managers at OpenAI(48:45) Embracing ambiguity in product management(51:41) The role of AI in product teams(53:21) Vibe coding and AI prototyping(55:55) The future of product teams and fine-tuned models(01:04:36) AI in education(01:06:42) Optimism and concerns about AI's future(01:16:37) Reflections on the Libra project(01:20:37) Lightning round and final thoughts—Referenced:• OpenAI: https://openai.com/• The AI-Generated Studio Ghibli Trend, Explained: https://www.forbes.com/sites/danidiplacido/2025/03/27/the-ai-generated-studio-ghibli-trend-explained/• Introducing 4o Image Generation: https://openai.com/index/introducing-4o-image-generation/• Waymo: https://waymo.com/• X: https://x.com• Facebook: https://www.facebook.com/• Instagram: https://www.instagram.com/• Planet: https://www.planet.com/• Sam Altman on X: https://x.com/sama• A conversation with OpenAI's CPO Kevin Weil, Anthropic's CPO Mike Krieger, and Sarah Guo: https://www.youtube.com/watch?v=IxkvVZua28k• OpenAI evals: https://github.com/openai/evals• Deep Research: https://openai.com/index/introducing-deep-research/• Ev Williams on X: https://x.com/ev• OpenAI API: https://platform.openai.com/docs/overview• Dwight Eisenhower quote: https://www.brainyquote.com/quotes/dwight_d_eisenhower_164720• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons• StackBlitz: https://stackblitz.com/• Claude 3.5 Sonnet: https://www.anthropic.com/news/claude-3-5-sonnet• Anthropic: 
https://www.anthropic.com/• Four-minute mile: https://en.wikipedia.org/wiki/Four-minute_mile• Chad: https://chatgpt.com/g/g-3F100ZiIe-chad-open-a-i• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/• Figma: https://www.figma.com/• Julia Villagra on LinkedIn: https://www.linkedin.com/in/juliavillagra/• Andrej Karpathy on X: https://x.com/karpathy• Silicon Valley CEO says ‘vibe coding' lets 10 engineers do the work of 100—here's how to use it: https://fortune.com/2025/03/26/silicon-valley-ceo-says-vibe-coding-lets-10-engineers-do-the-work-of-100-heres-how-to-use-it/• Cursor: https://www.cursor.com/• Windsurf: https://codeium.com/windsurf• GitHub Copilot: https://github.com/features/copilot• Patrick Srail on X: https://x.com/patricksrail• Khan Academy: https://www.khanacademy.org/• CK-12 Education: https://www.ck12.org/• Sora: https://openai.com/sora/• Sam Altman's post on X about creative writing: https://x.com/sama/status/1899535387435086115• Diem (formerly known as Libra): https://en.wikipedia.org/wiki/Diem_(digital_currency)• Novi: https://about.fb.com/news/2020/05/welcome-to-novi/• David Marcus on LinkedIn: https://www.linkedin.com/in/dmarcus/• Peter Zeihan on X: https://x.com/PeterZeihan• The Wheel of Time on Prime Video: https://www.amazon.com/Wheel-Time-Season-1/dp/B09F59CZ7R• Top Gun: Maverick on Prime Video: https://www.amazon.com/Top-Gun-Maverick-Joseph-Kosinski/dp/B0DM2LYL8G• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice | Alex Komoroske (Stripe, Google): https://www.lennysnewsletter.com/p/unconventional-product-advice-alex-komoroske• MySQL: https://www.mysql.com/—Recommended books:• Co-Intelligence: Living and Working with AI: https://www.amazon.com/Co-Intelligence-Living-Working-Ethan-Mollick/dp/059371671X• The Accidental Superpower: Ten Years On: https://www.amazon.com/Accidental-Superpower-Ten-Years/dp/1538767341• Cable Cowboy: https://www.amazon.com/Cable-Cowboy-Malone-Modern-Business/dp/047170637X—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
In this episode, Jake and Michael discuss Michael's new recording gear, building integrations with external APIs using Saloon, and configuring Laravel Horizon.
RJJ Software's Software Development Service This episode of The Modern .NET Show is supported, in part, by RJJ Software's Podcasting Services, whether your company is looking to elevate its UK operations or reshape its US strategy, we can provide tailored solutions that exceed expectations. Show Notes "So I've been focused on the code to cloud journey, I like to call it, for the template. And two years ago, my goal was to provide a solution that could take you from code to cloud in 45 minutes or less. So I wanted it to be "file new project" to deploy a solution on Azure—because that's where my main focus is—within 45 minutes."— Jason Taylor Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie "GaProgMan" Taylor. In this episode, Jason Taylor (no relation) joined us to talk about his journey from Classic ASP to .NET and Azure. He also discusses clean architecture's maintainability, and his open-source Clean Architecture Solution template for ASP .NET Core, along with strategies for learning new frameworks and dealing with complexity. "Right now the template supports PostgreSQL, SQLite, and SQL Server. If you want to support MySQL, it's relatively easy to do because there's already a Bicep module or a Terraform module that you can go in and use it. So I went from 45 minutes to now I can get things up and running in like, I don't know, two minutes of effort and 15 minutes of waiting around while I make my coffee"— Jason Taylor Along the way, we talk about some of the complexities involved with creating a template which supports multiple different frontend technologies and .NET Aspire (which was news to me when we recorded), all the while maintaining the goal of being the simplest approach for enterprise development with Clean Architecture. Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET. Supporting the Show If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show. Full Show Notes The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/from-code-to-cloud-in-15-minutes-jason-taylors-expert-insights-and-the-clean-architecture-template/ Jason's Links: Jason's Clean Architecture repo on GitHub Jason's Northwind Traders with Clean Architecture repo on Github Connect with Jason Jason's RapidBlazor repo on GitHub Other Links: C# DevKit for Visual Studio Code Code, Coffee, and Clever Debugging: Leslie Richardson's Microsoft Journey and the C# Dev Kit in Visual Studio Code with Leslie Richardson dotnet scaffold devcontainers .NET Aspire Azure Developer CLI GitHub CLI Obsidian Supporting the show: Leave a rating or review Buy the show a coffee Become a patron Getting in Touch: Via the contact page Joining the Discord Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts, this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. 
We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast. Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show
If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday!If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control. 
This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.Other highlights from our conversation* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.Timestamps* 00:00 Introduction and Guest Welcome* 02:29 Dharmesh Shah's Journey into AI* 05:22 Defining AI Agents* 06:45 The Evolution and Future of AI Agents* 13:53 Graph Theory and Knowledge Representation* 20:02 Engineering Practices and Overengineering* 25:57 The Role of Junior Engineers in the AI Era* 28:20 Multi-Agent Systems and MCP Standards* 35:55 LinkedIn's Legal Battles and Data Scraping* 37:32 The Future of AI and Hybrid Teams* 39:19 Building Agent AI: A Professional Network for Agents* 40:43 Challenges and Innovations in Agent AI* 45:02 The Evolution of UI in AI Systems* 01:00:25 Business Models: Work as a Service vs. Results as a Service* 01:09:17 The Future Value of Engineers* 01:09:51 Exploring the Role of Agents* 01:10:28 The Importance of Memory in AI* 01:11:02 Challenges and Opportunities in AI Memory* 01:12:41 Selective Memory and Privacy Concerns* 01:13:27 The Evolution of AI Tools and Platforms* 01:18:23 Domain Names and AI Projects* 01:32:08 Balancing Work and Personal Life* 01:35:52 Final Thoughts and ReflectionsTranscriptAlessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent AI.Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. 
So how did you get agent religion?Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do. But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone. So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the ChatGP 3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. 
You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism. But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGPT, I think it was, not Autogen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical. It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, AutoGPT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread.
And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RAS with MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing. So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like an MMCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents. And why do we need to draw this distinction between tools, which are functions most of the time? And an actual agent. And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. 
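As a way to picture the "tools are just atomic agents" framing above, here is a toy sketch. It is my own illustration, not code from the episode, HubSpot, Agent.ai, or any MCP SDK: a plain function wrapped as an atomic agent and a composite agent both expose the same single primitive, so composition is uniform all the way down.

```python
# Toy illustration of "tools as atomic agents": everything shares one
# primitive (an agent that pursues a goal); a tool is just the single-celled
# case. Hypothetical code, not any real framework or API.
from typing import Callable, Protocol


class Agent(Protocol):
    def run(self, goal: str) -> str: ...


class AtomicAgent:
    """Wraps a plain function (a 'tool') behind the same agent primitive."""
    def __init__(self, name: str, fn: Callable[[str], str]):
        self.name, self.fn = name, fn

    def run(self, goal: str) -> str:
        return self.fn(goal)


class CompositeAgent:
    """Decomposes a goal by delegating to other agents, atomic or composite."""
    def __init__(self, name: str, members: list[Agent]):
        self.name, self.members = name, members

    def run(self, goal: str) -> str:
        # A real system would use an LLM to plan and delegate; here we fan out.
        return "\n".join(member.run(goal) for member in self.members)


search = AtomicAgent("search", lambda g: f"search results for: {g}")
summarize = AtomicAgent("summarize", lambda g: f"summary of: {g}")
researcher = CompositeAgent("researcher", [search, summarize])
print(researcher.run("state of multi-agent systems"))
```

The point is only that one interface covers both the "single cell" and the "organism"; a real implementation would put planning, memory, and something like MCP-based discovery behind the composite's run().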
Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.swyx [00:10:54]: Opening eyes already on that. Yeah. My quick philosophical engagement with you on this. I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I don't think people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a limitless pendant in my pocket.Dharmesh [00:11:37]: I got one of these boys. Yeah.swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas. 
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.Alessio [00:13:04]: Yep. Do you like graph as an agent? Abstraction? That's been one of the hot topics with LandGraph and Pydantic and all that.Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And so I've grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things. So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. 
You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank. And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one that may, or maybe this one was more. popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some, some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for, for anything. I think the, the problem, like, so even though at my conferences, GraphRag is super popular and people are getting knowledge, graph religion, and I will say like, it's getting space, getting traction in two areas, conversation memory, and then also just rag in general, like the, the, the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they, they go really hard into it and then they get a, they get a graph that is too complex to navigate. Yes. And so like the, the, the simple way to put it is like you at running HubSpot, you know, the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is the, do we necessarily need to understand? So right now, LLMs for, for the most part are somewhat black boxes, right? We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and, and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as a vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, that's, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand in terms of, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the, the equivalent of the embedding algorithm, whatever they called in graph land. Um, so the one with the best results wins. I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me is like, yeah, it's, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. 
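For readers who want the "NodeRank" idea in concrete terms, here is a minimal PageRank-style power iteration over an arbitrary adjacency list. It is a generic sketch of the classic algorithm, not code from the episode or any named product; the example graph, damping factor, and iteration count are purely illustrative.

```python
# Minimal sketch of the "NodeRank" idea: PageRank-style power iteration over
# an arbitrary graph, where an edge could mean "contributed to" or "cites"
# rather than "links to". All values below are illustrative.
def node_rank(graph: dict[str, list[str]],
              damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, targets in graph.items():
            if not targets:                     # dangling node: spread evenly
                share = damping * rank[n] / len(nodes)
                for m in nodes:
                    new_rank[m] += share
            else:                               # share rank across out-edges
                share = damping * rank[n] / len(targets)
                for m in targets:
                    new_rank[m] += share
        rank = new_rank
    return rank


# Example: which node carries the most "authority" in a tiny contribution graph?
graph = {"alice": ["doc1"], "bob": ["doc1", "doc2"], "doc1": ["doc2"], "doc2": []}
print(sorted(node_rank(graph).items(), key=lambda kv: -kv[1]))
```

Swapping in a different definition of "authority" just means changing what the edges represent; the iteration itself stays the same.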
Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? Set. That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that? And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that kind of going through the, uh, traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in, in useful ways. Uh, so you wouldn't like represent a social graph, uh, in, in using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, just engineering advice. Yeah. Uh, obviously you've, you've, you've run, uh, you've, you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, like, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way. Yeah. It's like a marginal increased cost in those cases. Just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal. If it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well, then the question is, okay, well, am I building a framework as the reusable library? To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, energy, the expected value of it's like, okay, here are the five possible things that could happen, uh, try to assign probabilities like, okay, well, if there's a 50% chance that we're going to go down this particular path at some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically, it's something that yields a kind of interest compounding value. Um, as you get closer to the time of, of needing that versus having to take on debt, which is when you under engineer it, you're taking on debt. 
You're going to have to pay off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it. If I were going to err on the side of something, and here's the reason is that when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen, never actually, you never have that use case transpire or just doesn't, it's like, well, you just save yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it away, over-engineering it. But there's no perfect answers in art form in terms of, uh, and yeah, we'll, we'll bring kind of this layers of abstraction back on the code generation conversation, which we'll, uh, I think I have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly. Yeah. Like, as you think about vibe coding and all that, how does the. Yeah. Percentage of potential usefulness change when I feel like we over-engineering a lot of times it's like the investment in syntax, it's less about the investment in like arc exacting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or a return on calories, kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or doing under engineering right now. Uh, we'll trend towards zero that says, okay, well, I don't have to get it right right now because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because that's going to trend towards zero to be able, the ability to refactor a code. Um, and because we're going to not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make it, uh, make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there, it's not, um, just go on with your life vibe coded and, uh, come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things like, uh, today I built a autosave for like our internal notes platform and I literally just ask them cursor. Can you add autosave? Yeah. I don't know if it's over under engineer. Yep. I just vibe coded it. Yep. And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is, but this is where the, like the, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. 
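A toy version of the "return on calories" arithmetic described above, with entirely invented numbers, just to show the shape of the expected-value comparison:

```python
# Made-up numbers: engineering ahead costs 10% extra now (0.1 week); there is
# a 50% chance the flexibility is ever needed; if it is, finishing it then
# costs 1 week if we built it in now vs. 4 weeks of rework if we skipped it.
cost_now_extra = 0.10 * 1.0          # 10% of a 1-week task
p_needed = 0.5
rework_later, rework_now = 4.0, 1.0  # weeks

expected_cost_under = p_needed * rework_later           # pay only if needed
expected_cost_over = cost_now_extra + p_needed * rework_now

print(f"under-engineer: {expected_cost_under:.1f} expected weeks")
print(f"over-engineer:  {expected_cost_over:.1f} expected weeks")
```

With these invented numbers the up-front flexibility wins; drop the probability or the rework penalty far enough and it no longer does, which is exactly the judgment call being described.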
But then, like, in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be, to a product, as that price tends towards zero, are we going to be less discriminant about what features we add, as a result making products more complicated, which has a negative impact on the user and a negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy: are we going to be too promiscuous in our, uh, kind of extension, adding product extensions and things like that? It's like, ah, why not add X, Y, Z or whatever? Back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that at least kept us in check a little bit. Yeah. Alessio [00:24:22]: And then over-engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, hey, this is like the amount of complexity and over-engineering you can do before you've got to ask me if we should actually do it versus, like, do something else. Dharmesh [00:24:45]: So you know, we've already, like, we're living this, uh, in the code generation world, this kind of compressed, um, cycle time. Right. It's like, okay, we went from auto-complete, uh, in GitHub Copilot, to, like, oh, finish this particular thing and hit tab, to, oh, I sort of know your file or whatever, I can write out a full function for you, to now I can, like, hold a bunch of the context in my head. Uh, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent, yeah, and other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Makes sense. We might be able to generate platforms, as in, I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's nothing that says we would stop there. Like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that that's interesting to me in terms of, uh, you know, what happens when you take it to almost ludicrous levels of abstraction. swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead? Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking a Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, because it's not about the syntax, it's not about the coding. What he's learning is like the fundamental thing of like how things work.
And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether it's functions manifested as math, which he's going to get exposed to regardless, or, there are some core primitives to the universe, I think, and the more you understand them, those are what I would kind of think of as like really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful. swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI Engineer is a term. And the term is that maybe the traditional interview path or career path of software engineer goes away, because, what's the point of LeetCode? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep. Dharmesh [00:27:16]: That's one of the, like, interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI, we call it AI and it's obviously got its roots in machine learning, but it just feels like fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development, to so many different things. And so I'm wondering now, it's like, an AI engineer is like, if you were to draw the Venn diagram, it's interesting, because the cross between, like, AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah. swyx [00:28:04]: I just describe the overlap as, it separates out eventually until it's its own thing, but it's starting out as software. Yeah. Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel... Does it feel almost too magical in a way? Do you think it's like you get enough? Because you don't really see how the server itself is then kind of like repackaging the Dharmesh [00:28:41]: information for you? I think MCP as a standard is one of the better things that's happened in the world of AI, because a standard needed to exist, and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is, like, so I'm a big believer in multi-agent systems. And so that's going back to our kind of this idea of an atomic agent.
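A minimal example of the point that "a reasonable engineer can stand up an MCP server relatively easily": a hedged sketch assuming the interface of the official MCP Python SDK's FastMCP helper (adjust to whatever SDK version you actually use); the tool itself is a made-up placeholder, not a real agent.ai endpoint.

```python
# Minimal MCP server sketch. Assumes the official MCP Python SDK ("mcp"
# package) and its FastMCP helper; the valuation heuristic is invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def estimate_domain_value(domain: str) -> str:
    """Return a rough, made-up valuation for a domain name."""
    guess = 25_000 if len(domain) <= 10 else 2_000
    return f"{domain}: roughly ${guess:,} (illustrative heuristic only)"

if __name__ == "__main__":
    mcp.run()  # stdio transport, so an MCP client can launch this script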
So imagine the MCP server, like obviously it calls tools, but the way I think about it, so my current passion project is agent.ai. And we'll talk more about that in a little bit. More about the, I think we should, because I think it's interesting, not to promote the project at all, but there's some interesting ideas in there. One of which is around, we're going to need a mechanism for, if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery, and we're going to need some standard way. It's like, okay, well, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on. I'm sure others will and have been doing directories of, and there's going to be a standard around that too. How do you build out a directory of MCP servers? I think that's going to unlock so many things, just because, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock, because it allows systems that don't know about each other, don't need to, it's that kind of decoupling of, like, Sentry and whatever tools someone else was building. And it's not just about, you know, Claude Desktop or things like, even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients, versus just the chatbot-y kind of things. Like, you know, Claude Desktop and Cursor and things like that. But yeah, I'm very excited about MCP in that general direction. swyx [00:30:39]: I think the typical cynical developer take, it's like, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a, do you have a quick MCP versus everything else? Yeah. Dharmesh [00:30:49]: So it's, so I like OpenAPI, right? So just a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can do machine-generated, lots of things from that output. It's a structured definition of an API. I get that, love it. But MCPs sort of are kind of use-case specific. They're perfect for exactly what we're trying to use them for around LLMs in terms of discovery. It's like, okay, I don't necessarily need to know kind of all this detail. And so right now we have, we'll talk more about, like, MCP server implementations, but... We will? I think, I don't know. Maybe we won't. At least it's in my head. It's like a background process. But I do think MCP adds value above OpenAPI. It's, yeah, just because it solves this particular thing. And if we had come to the world, which we have, like, it's like, hey, we already have OpenAPI. It's like, if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off: because it marginally adds something that was missing before and doesn't go too far. And so that's why the kind of rate of adoption, you folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful, and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that. swyx [00:32:09]: The meta lesson, this is, I mean, he's an investor in DevTools companies. I work in developer experience and DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, is HubSpot known for a standard? You know, obviously inbound marketing.
But is there a standard or protocol that you ever tried to push? No. Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean to speak for the people of HubSpot, but I personally. You kind of do. I'm not smart enough. That's not the, like, I think I have a. You're smart. Not enough for that. I'm much better off understanding the standards that are out there. And I'm more on the composability side. Let's, like, take the pieces of technology that exist out there, combine them in creative, unique ways. And I like to consume standards. I don't like to, and that's not that I don't like to create them. I just don't think I have both the raw wattage or the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created? swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards, like OpenTelemetry is a big standard, and LightStep never capitalized on that. Dharmesh [00:33:15]: So, okay, so if I were to do a standard, there's two things that have been in my head in the past. One was around, a very, very basic one around, I don't even have the domain, I have a domain for everything, for open marketing. Because the issue we had, HubSpot grew up in the marketing space, there we go, there was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here, it's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. But what I think should exist in the world is, right now, our information, all of us, nodes are in the social graph at Meta or the professional graph at LinkedIn. Both of which are actually relatively closed, in actually very annoying ways. Like very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms that I choose, as long as I have control over it, as opt-in. So the idea around OpenGraph is that it says, here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right? And I can choose along the way, and people can write to it, and I can approve. And there can be an entire system. And if I were to do that, I would do it as a... Like a public-benefit, non-profit-y kind of thing, as this is a contribution to society. I wouldn't try to commercialize that. Have you looked at ATProto? What's that? ATProto. swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend, Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically kind of tries to articulate what you just said. But he does, he loves doing these, like, really great analogies, which I think you'll like. Like, you know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was, like, our handle and then the domain? Yep. So, and that's really like, your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep. Dharmesh [00:35:19]: You should be able to at least be able to automate it, or do like, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is...
Locked up. I think the trick here isn't that standard. It is getting the normies to care.swyx [00:35:37]: Yeah. Because normies don't care.Dharmesh [00:35:38]: That's true. But building on that, normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. Like there are use cases where, and we make these choices all the time, that I will trade, not all privacy, but I will trade some privacy for some productivity gain or some benefit to me that says, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.Alessio [00:36:02]: If I'm getting, you know, this in return, but that sort of should be my option. I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that. But sometimes you sort of have to wonder, it's like, okay, well, that was 15, 20 years ago. There's likely some controlled ways to expose some data on some member's behalf and not just completely be a binary. It's like, no, thou shalt not have the data.swyx [00:37:54]: Well, just pay for sales navigator.Alessio [00:37:57]: Before we move to the next layer of instruction, anything else on MCP you mentioned? Let's move back and then I'll tie it back to MCPs.Dharmesh [00:38:05]: So I think the... Open this with agent. Okay, so I'll start with... Here's my kind of running thesis, is that as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more. I don't like to anthropomorphize. We'll talk about why this is not that. Less as just like raw tools and more like teammates. They'll still be software. 
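The "OpenGraph" publishing idea discussed a little earlier is the speaker's own hypothetical, so everything below is invented: purely as an illustration, a user-owned, opt-in profile document published at an address the owner controls might look something like this.

```python
# Illustrative only: a self-published, opt-in profile document of the kind
# described above. The URL, field names, and format are invented, not a real
# or proposed specification, and all values are placeholders.
import json

example_profile = {
    "handle": "example-handle",       # could live at e.g. opengraph.org/example-handle.json
    "visibility": "public",           # the owner decides what is exposed
    "profiles": {
        "linkedin": "https://www.linkedin.com/in/example",
        "bluesky": "https://bsky.app/profile/example",
    },
    "edges": [
        {"type": "follows", "target": "another-handle"},
        {"type": "member_of", "target": "some-community"},
    ],
}
print(json.dumps(example_profile, indent=2))
```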
They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon that eventually we're going to have these kind of digitally hybrid teams. So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 
3,000 people have actually, you know, built some variation of an agent, sometimes just for their own personal productivity. About 1,000 of which have been published. And the reason this comes back to MCP for me, so imagine that and other networks, since I know agent.ai. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do, like, super useful things. Like, you know, I have access to a Twitter API that I can subsidize the cost. And I can say, you know, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then we have a developer to say, oh, I have this idea. I don't have to worry about open AI. I don't have to worry about, now, you know, this particular model is better. It has access to all the models with one key. And we proxy it kind of behind the scenes. And then expose it. So then we get this kind of community effect, right? That says, oh, well, someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains because I'm obsessed with domains, right? And, like, there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here are what houses in your neighborhood sold for. It's like, well, why doesn't that exist? We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, well, let me go look for past transactions. You say, okay, I'm going to type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. It can go and say, oh, which is what it does. It's like, I'm going to go look at are there any published domain transactions recently that are similar, either use the same word, same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its kind of rationale for why it picked the value and comparable transactions. Oh, by the way, this domain sold for published. Okay. So that agent now, let's say, existed on the web, on agent.ai. Then imagine someone else says, oh, you know, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. Like a common problem, every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is like, oh, well, I need to find the domain for it. What are possible choices? Now it's like, okay, well, it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, well, I want to find where the arbitrage is, where the agent valuation tool says this thing is worth $25,000. It's listed on GoDaddy for $5,000. It's close enough. Let's go do that. Right? And that's a kind of composition use case that in my future state. Thousands of agents on the network, all discoverable through something like MCP. And then you as a developer of agents have access to all these kind of Lego building blocks based on what you're trying to solve. Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have. 
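As a sketch of the composition use case just described (a naming agent plus a valuation agent plus an aftermarket lookup, looking for arbitrage), with stand-in functions rather than real agent.ai endpoints:

```python
# Toy composition of three "agents". All three functions are placeholders
# with made-up heuristics; a real pipeline would call actual agents or tools.
from dataclasses import dataclass

@dataclass
class Candidate:
    domain: str
    asking_price: float       # aftermarket listing price
    estimated_value: float

def propose_domains(keywords: list[str]) -> list[str]:
    # stand-in for a brand/naming agent
    return [f"{kw}hub.com" for kw in keywords]

def estimate_value(domain: str) -> float:
    # stand-in for the domain-valuation agent (comparable-sales heuristic)
    return 25_000.0 if "launch" in domain else 2_000.0

def asking_price(domain: str) -> float:
    # stand-in for an aftermarket listing lookup
    return 5_000.0

candidates = [
    Candidate(d, asking_price(d), estimate_value(d))
    for d in propose_domains(["launch", "brand"])
]
bargains = [c for c in candidates if c.estimated_value > 3 * c.asking_price]
for c in bargains:
    print(f"{c.domain}: listed {c.asking_price:,.0f}, estimated {c.estimated_value:,.0f}")
```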
Now, the next layer that we're all contending with is, how many tools can you actually give an LLM before the LLM breaks? That number used to be like 15 or 20 before results kind of started to vary dramatically. And so that's the thing I'm thinking about now. It's like, okay, if I want to... If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be able to be helpful for this particular thing? Yeah. Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, that seems like, you know, then there's going to be a Latent Space Scheduler. And then once I schedule a research, you know, and you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good. swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing. Alessio [00:45:01]: Yeah, just to, like, quickly run through it. You can basically create all these different steps. And these steps are like, you know, static versus, like, variable-driven things. How did you decide between this kind of like low-code-ish versus doing, you know, low-code with a code backend versus, like, not exposing that at all? Any fun design decisions? Yeah. And this is, I think... Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, kind of choosing between deterministic and non-deterministic. Like, if you're, like, in a business or building, you know, some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models? The original idea, and the reason I took the low-code, stepwise, very deterministic approach: A, the reasoning models did not exist at that time. That's thing number one. Thing number two is, if you can get... If you know in your head... If you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk about, UI and agents. Right now, the primary interaction model... Or they don't talk enough about it. I know some people have. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be... Some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine, it's like if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that. Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents.
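A toy version of the "RAG for tools" intermediate layer raised at the top of this exchange: given many tool or agent descriptions, pick a small top-k to expose to the LLM for a given prompt. A real system would presumably use embedding similarity; plain token overlap keeps the sketch dependency-free and is purely illustrative.

```python
# Toy tool-selection layer. TOOLS and the scoring function are illustrative
# stand-ins for an embedding-based retrieval step over many agent descriptions.
def score(prompt: str, description: str) -> float:
    p, d = set(prompt.lower().split()), set(description.lower().split())
    return len(p & d) / (len(d) or 1)

TOOLS = {
    "domain_valuation": "estimate the market value of a website domain name",
    "tweet_writer": "draft social media posts for twitter",
    "sql_explainer": "explain and optimize slow mysql queries",
}

def select_tools(prompt: str, k: int = 2) -> list[str]:
    ranked = sorted(TOOLS, key=lambda name: score(prompt, TOOLS[name]), reverse=True)
    return ranked[:k]   # only these get handed to the LLM as available tools

print(select_tools("what is this domain name worth on the market?"))
```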
So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in HTML, the original HTML 1.0 days, right? Like, you're the smallest possible set of primitives for a UI. And it just says, okay, because we need to collect some information from the user, and then we go do steps and do things. And generate some output in HTML or markup are the two primary examples. So the thing I've been asking myself, if I keep going down that path. So people ask me, I get requests all the time. It's like, oh, can you make the UI sort of boring? I need to be able to do this, right? And if I keep pulling on that, it's like, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be backcoding once I get done here, is around injecting a code generation UI generation into, the agent.ai flow, right? As a builder, you're like, okay, I'm going to describe the thing that I want, much like you would do in a vibe coding world. But instead of generating the entire app, it's going to generate the UI that exists at some point in either that deterministic flow or something like that. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. And what I think of it as a, so it's like, I'm going to generate the code, generate the code, tweak it, go through this kind of prompt style, like we do with vibe coding now. And at some point, I'm going to be happy with it. And I'm going to hit save. And that's going to become the action in that particular step. It's like a caching of the generated code that I can then, like incur any inference time costs. It's just the actual code at that point.Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandbox. And they powered the LM arena web arena. So it's basically the, just like you do LMS, like text to text, they do the same for like UI generation. So if you're asking a model, how do you do it? But yeah, I think that's kind of where.Dharmesh [00:48:45]: That's the thing I'm really fascinated by. So the early LLM, you know, we're understandably, but laughably bad at simple arithmetic, right? That's the thing like my wife, Normies would ask us, like, you call this AI, like it can't, my son would be like, it's just stupid. It can't even do like simple arithmetic. And then like we've discovered over time that, and there's a reason for this, right? It's like, it's a large, there's, you know, the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math, but now it's like, okay, well, the fact that it has access to a Python interpreter that I can actually call at runtime, that solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's kind of rattling around in my head is that that's great. So it's, it's like took the arithmetic problem and took it first. Now, like anything that's solvable through a relatively concrete Python program, it's able to do a bunch of things that I couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in a agentic AI world, but maybe let the LLM handle it, but not in the classic sense. 
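A hedged sketch of the "generate once, then cache" idea just described: UI code is produced at build time (a stub stands in for the model call), keyed by a hash of the builder's description, and reused afterwards with no inference cost. The cache layout and file names are illustrative.

```python
# Build-time UI generation with a content-addressed cache. generate_ui() is a
# placeholder for a real code-generation model call; once a result is saved,
# later runs reuse the cached code and pay no inference cost.
import hashlib
import pathlib

CACHE = pathlib.Path("ui_cache")
CACHE.mkdir(exist_ok=True)

def generate_ui(description: str) -> str:
    # stand-in for a code-generation model call
    return f"<form><!-- UI for: {description} --></form>"

def ui_for(description: str) -> str:
    key = hashlib.sha256(description.encode()).hexdigest()[:16]
    path = CACHE / f"{key}.html"
    if not path.exists():            # only pay the generation cost once
        path.write_text(generate_ui(description))
    return path.read_text()

print(ui_for("a form that collects a domain name and a budget"))
```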
Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.Alessio [00:49:48]: And especially when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just web hooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What
Luca Casonato, member of the Deno core team, delves into the intricacies of debugging applications using Deno and OpenTelemetry. Discover how Deno's native integration with OpenTelemetry enhances application performance monitoring, simplifies instrumentation compared to Node.js, and unlocks new insights for developers! Links https://lcas.dev https://x.com/lcasdev https://github.com/lucacasonato https://mastodon.social/@lcasdev https://www.linkedin.com/in/luca-casonato-15946b156 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today: https://logrocket.com/signup/?pdr Special Guest: Luca Casonato.
In this episode of The Eric Ries Show, I sit down with Marten Mickos, a serial tech CEO who has been at the forefront of some of the most transformative moments in open-source technology. From leading MySQL through its groundbreaking journey to guiding HackerOne as a pioneering bug bounty platform, Marten's career is a masterclass in building innovative, trust-driven organizations. Our wide-ranging conversation explores Marten's remarkable journey through tech leadership, touching on his experiences building game-changing companies and, more recently, his work coaching emerging CEOs. We dive deep into the world of open source, company culture, and the nuanced art of leadership. In our conversation today, we talk about the following topics: • How MySQL revolutionized open-source databases and became Facebook's database • The strategic decision to make MySQL open source and leverage Linux distributions • The art of building a beloved open-source project while creating a profitable business model • How a lawsuit solidified MySQL's position in the open-source database market • The role of transparency and direct feedback in building organizational trust • Why Marten was drawn to HackerOne's disruptive approach to cybersecurity • Marten's transition to coaching new CEOs • Marten's unique "contrast framework" for making complex decisions • And much more! — Brought to you by: • Wilson Sonsini – Wilson Sonsini is the innovation economy's law firm. Learn more. • Gusto – Gusto is an easy payroll and benefits software built for small businesses. Get 3 months free. — Where to find Marten Mickos: • LinkedIn: https://www.linkedin.com/in/martenmickos/ • Bluesky: https://bsky.app/profile/martenmickos.bsky.social — Where to find Eric: • Newsletter: https://ericries.carrd.co/ • Podcast: https://ericriesshow.com/ • YouTube: https://www.youtube.com/@theericriesshow — In This Episode We Cover: (00:00) Intro (03:15) The first time Eric used MySQL (07:10) The origins of MySQL and how Marten got involved (13:22) Why MySQL pivoted to open source to leverage the power of Linux distros (17:03) Open source vs. closed (18:56) Building profitable open-source companies (24:52) The fearless company culture at MySQL and the Progress lawsuit (29:30) The value of not cutting any corners (33:35) How a dolphin became part of the MySQL logo (35:55) What it was like to build a company of true believers (38:47) Marten's management approach emphasizes kindness and direct feedback (42:12) Marten's hiring philosophy (45:14) Why MySQL sold to Sun Microsystems and tried to avoid Oracle (50:24) How Oracle has made MySQL even better (52:22) Why Marten decided to lead at HackerOne (55:41) An overview of HackerOne (59:31) How HackerOne got started and landed the Department of Defense contract (1:03:19) The trust-building power of transparency (1:08:30) Marten's successor and the state of HackerOne now (1:09:23) Marten's work coaching CEOs (1:14:20) Common issues CEOs struggle with (1:16:45) Marten's contrast framework (1:26:12) The book of Finnish poetry that inspired Marten's love of polarities — You can find the transcript and references at https://www.ericriesshow.com/ — Production and marketing by https://penname.co/. Eric may be an investor in the companies discussed.
AWS Morning Brief for the week of March 17th, with Corey Quinn. Links:
Amazon Bedrock now supports multi-agent collaboration
Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20250213
Amazon Route 53 Traffic Flow introduces a new visual editor to improve DNS policy editing
Application Load Balancer announces integration with Amazon VPC IPAM
Announcing the end of support for Node.js 14.x and 16.x in AWS CDK
Watch the recordings from AWS Developer Day 2025
How GoDaddy built a category generation system at scale with batch inference for Amazon Bedrock
Formula 1® unlocks the most competitive season yet with AWS
Secure cloud innovation starts at re:Inforce 2025
In this episode, Jake and Michael discuss circles of influence and information, eloquently handling return of single values from the database, and monitoring tools for your applications.
In this episode, Jeremy Maldonado shares his experiences and insights on server management, highlighting the importance of learning from mistakes, the power of automation, and finding balance between Linux and Windows environments. He discusses the challenges and rewards of managing servers, the pivotal role of Ansible in streamlining operations, and the confidence required to maintain a reliable infrastructure. Jeremy encourages listeners to view setbacks as opportunities for growth while reminding us to be kind to ourselves throughout our professional journeys.
Jake and Michael discuss those features you ship that nobody uses but everybody has feedback for, testing a system where the valid state can change based on user input, and compliance auditing and adherence.
AWS Morning Brief for the week of February 17, with Corey Quinn. Links:
Amazon DynamoDB now supports auto-approval of quota adjustments
Amazon Elastic Block Store (EBS) now adds full snapshot size information in Console and API
Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20250103
Amazon Redshift Serverless announces reduction in IP Address Requirements to 3 per Subnet
AWS Deadline Cloud now supports Adobe After Effects in Service-Managed Fleets
AWS Network Load Balancer now supports removing availability zones
AWS CloudTrail network activity events for VPC endpoints now generally available
Harness Amazon Bedrock Agents to Manage SAP Instances
Timestamp writes for write hedging in Amazon DynamoDB
Updating AWS SDK defaults – AWS STS service endpoint and Retry Strategy
Learning AWS best practices from Amazon Q in the Console
Automating Cost Optimization Governance with AWS Config
Amazon Q Developer in chat applications rename - Summary of changes - AWS Chatbot
In this episode, Jake and Michael discuss the blockbuster trade of Luka Dončić to the Los Angeles Lakers in exchange for Anthony Davis, the just-announced Laracon US, and pitch our talks for the very same conference.
News includes the exciting release of Oban Web as open source with newly added MySQL support, nine new ElixirConf 2024 videos have been published, a new full-stack web framework called Hologram that transpiles Elixir to JavaScript was announced, PhoenixTest gained Playwright driver support for enhanced testing capabilities, Protoss reached feature-complete status as it moves to version 1.0, and several Elixir conferences were announced including Code BEAM Lite Stockholm and GigCityElixir, and more! Show Notes online - http://podcast.thinkingelixir.com/238 (http://podcast.thinkingelixir.com/238) Elixir Community News https://oban.pro/articles/oss-web-and-new-oban (https://oban.pro/articles/oss-web-and-new-oban?utm_source=thinkingelixir&utm_medium=shownotes) – Oban Web has been officially released as OpenSource, including MySQL support in Oban v2.19 and Oban Web v2.11. https://www.youtube.com/playlist?list=PLqj39LCvnOWbW2Zli4LurDGc6lL5ij-9Y (https://www.youtube.com/playlist?list=PLqj39LCvnOWbW2Zli4LurDGc6lL5ij-9Y?utm_source=thinkingelixir&utm_medium=shownotes) – Nine new ElixirConf 2024 videos have been published and added to the official YouTube playlist. https://hologram.page/ (https://hologram.page/?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction of Hologram, a new full stack isomorphic Elixir web framework that transpiles Elixir to JavaScript for client-side code. https://github.com/bartblast/hologram (https://github.com/bartblast/hologram?utm_source=thinkingelixir&utm_medium=shownotes) – The GitHub repository for Hologram, currently at version 0.2.0. https://hexdocs.pm/phoenixtestplaywright/PhoenixTest.Playwright.html (https://hexdocs.pm/phoenix_test_playwright/PhoenixTest.Playwright.html?utm_source=thinkingelixir&utm_medium=shownotes) – PhoenixTest now has a Playwright driver, enabling three layers of Phoenix testing with a common assertion layer. https://github.com/ityonemo/protoss (https://github.com/ityonemo/protoss?utm_source=thinkingelixir&utm_medium=shownotes) – Protoss, a library for powerful Elixir protocols, is now feature-complete and moving to version 1.0. Looking for maintainer. https://ashweekly.substack.com/p/ash-weekly-issue-1 (https://ashweekly.substack.com/p/ash-weekly-issue-1?utm_source=thinkingelixir&utm_medium=shownotes) – Launch of Ash Weekly newsletter to keep up with Ash Framework updates and news. https://ash-project.github.io/ash_phoenix/nested-forms.html (https://ash-project.github.io/ash_phoenix/nested-forms.html?utm_source=thinkingelixir&utm_medium=shownotes) – AshPhoenix update featuring improved handling for nested forms. https://sessionize.com/code-beam-lite-stockholm-2025 (https://sessionize.com/code-beam-lite-stockholm-2025?utm_source=thinkingelixir&utm_medium=shownotes) – Call for speakers open until February 20th for Code BEAM Lite Stockholm, happening June 2nd 2025. NervesConf EU and Goatmire Elixir announced for September 10-12 in Varberg, Sweden. https://www.gigcityelixir.com/ (https://www.gigcityelixir.com/?utm_source=thinkingelixir&utm_medium=shownotes) – GigCityElixir conference announced in Chattanooga, TN, May 9-10, preceded by NervesConf on May 8th. Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com) - Message the show - X (https://x.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen on X - @brainlid (https://x.com/brainlid) - Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
In this episode, I'm joined by Jesmar Canol, COO of ProxySQL, to explore the journey behind the creation of this open source solution that has become a game-changer for database management. From his early days in IT to addressing the challenges that database administrators (DBAs) face daily, Jesmar shares the story of how ProxySQL evolved from a side project into a vital tool for empowering database teams around the world. We discuss the complexities of managing MySQL and PostgreSQL infrastructures, ProxySQL's unique approach to query routing, load balancing, and its ability to maintain high availability even in the most demanding environments. Jesmar explains why ProxySQL's open-source model is critical in fostering trust and transparency, and how it helps organizations adapt to the growing demands for cloud-native and on-premise database solutions. Jesmar also offers insights into the challenges of running a distributed team, the evolution of database management in an era of increasing automation, and the emerging trends shaping the future of this space. Whether you're a seasoned DBA, a tech leader, or simply curious about the transformative power of open source solutions, this episode is packed with valuable takeaways.
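For readers curious what the query routing and read/write splitting discussed in this episode look like in practice, here is a hedged sketch against ProxySQL's documented admin interface (a MySQL-protocol endpoint, port 6032 by default). Hostgroup numbers, hostnames, and credentials are placeholders; consult the ProxySQL documentation before applying anything like this to a real deployment.

```python
# Illustrative ProxySQL setup: register backends and add read/write-split
# query rules through the admin interface. Values are placeholders.
import pymysql

admin = pymysql.connect(host="127.0.0.1", port=6032,
                        user="admin", password="admin", autocommit=True)
cur = admin.cursor()

# Backends: hostgroup 10 = writer, hostgroup 20 = reader (example layout).
cur.execute("INSERT INTO mysql_servers (hostgroup_id, hostname, port) "
            "VALUES (10, 'db-primary', 3306)")
cur.execute("INSERT INTO mysql_servers (hostgroup_id, hostname, port) "
            "VALUES (20, 'db-replica-1', 3306)")

# Route SELECT ... FOR UPDATE to the writer, other SELECTs to the reader.
cur.execute("INSERT INTO mysql_query_rules "
            "(rule_id, active, match_digest, destination_hostgroup, apply) "
            "VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1)")
cur.execute("INSERT INTO mysql_query_rules "
            "(rule_id, active, match_digest, destination_hostgroup, apply) "
            "VALUES (2, 1, '^SELECT', 20, 1)")

# Activate the new configuration and persist it.
for stmt in ("LOAD MYSQL SERVERS TO RUNTIME", "SAVE MYSQL SERVERS TO DISK",
             "LOAD MYSQL QUERY RULES TO RUNTIME", "SAVE MYSQL QUERY RULES TO DISK"):
    cur.execute(stmt)
```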
What if your organization could unlock the full potential of AI without ever compromising on privacy or sharing sensitive data? In this episode of Tech Talks Daily, I am joined by Alexander Alten, Co-Founder and CEO of Scalytics, to explore how he is building the next-generation infrastructure layer for AI agents. Alexander brings a wealth of expertise, having led data and product teams at industry giants like Cloudera, Allianz, and Healthgrades. With a background in startups such as X-Warp and Infinite Devices, he has a proven track record of developing customer-centric, data-driven solutions that not only disrupt conventional norms but also fuel measurable growth. During our conversation at the IT Press Tour in Malta, Alexander introduces Scalytics Connect, a modern AI data platform designed to accelerate insights while preserving privacy. He unpacks the challenges of breaking down data silos and explains why centralizing data may not always be the optimal solution. We also demystify federated learning, shedding light on its potential to empower businesses, particularly in regulated industries, to collaborate on AI models without exposing their data. The discussion extends to the value of open-source technologies and why they often emerge as long-term winners, citing examples like MySQL, Postgres, and WordPress. Alexander shares how Scalytics leverages open-source principles to provide scalable and transparent machine learning solutions for businesses looking to outperform in an increasingly data-driven world. As AI continues to redefine the way we work and innovate, Alexander's insights provide a roadmap for navigating the complexities of decentralized machine learning, privacy-first AI, and scalable technology. Could his approach to AI and data collaboration be the key to unlocking your organization's potential? Tune in to find out, and don't forget to share your thoughts on the future of AI-powered innovation.
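To make the federated learning idea in this episode concrete, here is a toy federated averaging (FedAvg) loop in which two parties train locally and share only model weights, never their raw data. The data, model, and numbers are invented for illustration and are not related to Scalytics Connect.

```python
# Toy FedAvg: each client fits a small linear model on private data; a
# coordinator averages the resulting weights. Only weights cross the boundary.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's private training step; X and y never leave the client."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

# Two organizations with private datasets drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(5):                                  # federation rounds
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)       # only weights are shared

print("estimated weights:", np.round(w_global, 2))  # close to [2.0, -1.0]
```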