Today's episode is with Paul Klein, founder of Browserbase. We talked about building browser infrastructure for AI agents, the future of agent authentication, and their open source framework Stagehand.

* [00:00:00] Introductions
* [00:04:46] AI-specific challenges in browser infrastructure
* [00:07:05] Multimodality in AI-Powered Browsing
* [00:12:26] Running headless browsers at scale
* [00:18:46] Geolocation when proxying
* [00:21:25] CAPTCHAs and Agent Auth
* [00:28:21] Building “User take over” functionality
* [00:33:43] Stagehand: AI web browsing framework
* [00:38:58] OpenAI's Operator and computer use agents
* [00:44:44] Surprising use cases of Browserbase
* [00:47:18] Future of browser automation and market competition
* [00:53:11] Being a solo founder

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.swyx [00:00:12]: Hey, and today we are very blessed to have our friends, Paul Klein, for the fourth, the fourth, CEO of Browserbase. Welcome.Paul [00:00:21]: Thanks guys. Yeah, I'm happy to be here. I've been lucky to know both of you for like a couple of years now, I think. So it's just like we're hanging out, you know, with three ginormous microphones in front of our face. It's totally normal hangout.swyx [00:00:34]: Yeah. We've actually mentioned you on the podcast, I think, more often than any other Solaris tenant. Just because like you're one of the, you know, best performing, I think, LLM tool companies that have started up in the last couple of years.Paul [00:00:50]: Yeah, I mean, it's been a whirlwind of a year, like Browserbase is actually pretty close to our first birthday. So we are one years old. And going from, you know, starting a company as a solo founder to... To, you know, having a team of 20 people, you know, a series A, but also being able to support hundreds of AI companies that are building AI applications that go out and automate the web. It's just been like, really cool. It's been happening a little too fast. I think like collectively as an AI industry, let's just take a week off together. I took my first vacation actually two weeks ago, and Operator came out on the first day, and then a week later, DeepSeek came out. And I'm like on vacation trying to chill. I'm like, we got to build with this stuff, right? So it's been a breakneck year. But I'm super happy to be here and like talk more about all the stuff we're seeing. And I'd love to hear kind of what you guys are excited about too, and share with it, you know?swyx [00:01:39]: Where to start? So people, you've done a bunch of podcasts. I think I strongly recommend Jack Bridger's Scaling DevTools, as well as Turner Novak's The Peel. And, you know, I'm sure there's others. So you covered your Twilio story in the past, talked about StreamClub, you got acquired to Mux, and then you left to start Browserbase. So maybe we just start with what is Browserbase? Yeah.Paul [00:02:02]: Browserbase is the web browser for your AI. We're building headless browser infrastructure, which are browsers that run in a server environment that's accessible to developers via APIs and SDKs. It's really hard to run a web browser in the cloud. You guys are probably running Chrome on your computers, and that's using a lot of resources, right? So if you want to run a web browser or thousands of web browsers, you can't just spin up a bunch of lambdas. You actually need to use a secure containerized environment. 
You have to scale it up and down. It's a stateful system. And that infrastructure is, like, super painful. And I know that firsthand, because at my last company, StreamClub, I was CTO, and I was building our own internal headless browser infrastructure. That's actually why we sold the company, is because Mux really wanted to buy our headless browser infrastructure that we'd built. And it's just a super hard problem. And I actually told my co-founders, I would never start another company unless it was a browser infrastructure company. And it turns out that's really necessary in the age of AI, when AI can actually go out and interact with websites, click on buttons, fill in forms. You need AI to do all of that work in an actual browser running somewhere on a server. And BrowserBase powers that.swyx [00:03:08]: While you're talking about it, it occurred to me, not that you're going to be acquired or anything, but it occurred to me that it would be really funny if you became the Nikita Bier of headless browser companies. You just have one trick, and you make browser companies that get acquired.Paul [00:03:23]: I truly do only have one trick. I'm screwed if it's not for headless browsers. I'm not a Go programmer. You know, I'm in AI Grant. You know, Browserbase is in AI Grant. But we were the only company in that AI Grant batch that used zero dollars on AI spend. You know, we're purely an infrastructure company. So as much as people want to ask me about reinforcement learning, I might not be the best guy to talk about that. But if you want to ask about headless browser infrastructure at scale, I can talk your ear off. So that's really my area of expertise. And it's a pretty niche thing. Like, nobody has done what we're doing at scale before. So we're happy to be the experts.swyx [00:03:59]: You do have an AI thing, stagehand. We can talk about the sort of core of browser-based first, and then maybe stagehand. Yeah, stagehand is kind of the web browsing framework. Yeah.What is Browserbase? Headless Browser Infrastructure ExplainedAlessio [00:04:10]: Yeah. Yeah. And maybe how you got to browser-based and what problems you saw. So one of the first things I worked on as a software engineer was integration testing. Sauce Labs was kind of like the main thing at the time. And then we had Selenium, we had Playwright, we had all these different browser things. But it's always been super hard to do. So obviously you've worked on this before. When you started browser-based, what were the challenges? What were the AI-specific challenges that you saw versus, there's kind of like all the usual running browsers at scale in the cloud, which has been a problem for years. What are like the AI unique things that you saw that like traditional approaches just didn't cover? Yeah.AI-specific challenges in browser infrastructurePaul [00:04:46]: First and foremost, I think back to like the first thing I did as a developer, like as a kid when I was writing code, I wanted to write code that did stuff for me. You know, I wanted to write code to automate my life. And I do that probably by using curl or Beautiful Soup to fetch data from a web browser. And I think I still do that now that I'm in the cloud. And the other thing that I think is a huge challenge for me is that you can't just create a web site and parse that data. And we all know that now like, you know, taking HTML and plugging that into an LLM, you can extract insights, you can summarize. 
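(As a rough illustration of the "plug HTML into an LLM" pattern Paul describes above, here is a minimal sketch using the OpenAI Node SDK; the model name, the URL, and the naive truncation are placeholders, not anything Browserbase-specific.)

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Fetch raw HTML with a plain HTTP request, then ask an LLM to summarize it.
async function summarizePage(url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: [
      { role: "system", content: "Summarize the key facts on this web page." },
      { role: "user", content: html.slice(0, 100_000) }, // naive truncation to fit context
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```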
So it was very clear that now like dynamic web scraping became very possible with the rise of large language models or a lot easier. And that was like a clear reason why there's been more usage of headless browsers, which are necessary because a lot of modern websites don't expose all of their page content via a simple HTTP request. You know, they actually do require you to run this type of code for a specific time. JavaScript on the page to hydrate this. Airbnb is a great example. You go to airbnb.com. A lot of that content on the page isn't there until after they run the initial hydration. So you can't just scrape it with a curl. You need to have some JavaScript run. And a browser is that JavaScript engine that's going to actually run all those requests on the page. So web data retrieval was definitely one driver of starting BrowserBase and the rise of being able to summarize that within LLM. Also, I was familiar with if I wanted to automate a website, I could write one script and that would work for one website. It was very static and deterministic. But the web is non-deterministic. The web is always changing. And until we had LLMs, there was no way to write scripts that you could write once that would run on any website. That would change with the structure of the website. Click the login button. It could mean something different on many different websites. And LLMs allow us to generate code on the fly to actually control that. So I think that rise of writing the generic automation scripts that can work on many different websites, to me, made it clear that browsers are going to be a lot more useful because now you can automate a lot more things without writing. If you wanted to write a script to book a demo call on 100 websites, previously, you had to write 100 scripts. Now you write one script that uses LLMs to generate that script. That's why we built our web browsing framework, StageHand, which does a lot of that work for you. But those two things, web data collection and then enhanced automation of many different websites, it just felt like big drivers for more browser infrastructure that would be required to power these kinds of features.Alessio [00:07:05]: And was multimodality also a big thing?Paul [00:07:08]: Now you can use the LLMs to look, even though the text in the dome might not be as friendly. Maybe my hot take is I was always kind of like, I didn't think vision would be as big of a driver. For UI automation, I felt like, you know, HTML is structured text and large language models are good with structured text. But it's clear that these computer use models are often vision driven, and they've been really pushing things forward. So definitely being multimodal, like rendering the page is required to take a screenshot to give that to a computer use model to take actions on a website. And it's just another win for browser. But I'll be honest, that wasn't what I was thinking early on. I didn't even think that we'd get here so fast with multimodality. I think we're going to have to get back to multimodal and vision models.swyx [00:07:50]: This is one of those things where I forgot to mention in my intro that I'm an investor in Browserbase. And I remember that when you pitched to me, like a lot of the stuff that we have today, we like wasn't on the original conversation. 
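(A small sketch of the hydration problem Paul describes above: a plain HTTP request only returns what the server sends before any JavaScript runs, while a real browser lets the page hydrate first. This uses Playwright with a placeholder URL.)

```ts
import { chromium } from "playwright";

const url = "https://www.example.com"; // placeholder

// Plain HTTP request: the pre-hydration shell, the "curl" view of the page.
const staticHtml = await (await fetch(url)).text();

// Real browser: run the page's JavaScript, wait for it to settle, then read the DOM.
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto(url, { waitUntil: "networkidle" });
const hydratedHtml = await page.content();
await browser.close();

console.log(staticHtml.length, hydratedHtml.length); // the hydrated page is usually much larger
```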
But I did have my original thesis was something that we've talked about on the podcast before, which is take the GPT store, the custom GPT store, all the every single checkbox and plugin is effectively a startup. And this was the browser one. I think the main hesitation, I think I actually took a while to get back to you. The main hesitation was that there were others. Like you're not the first headless browser startup. It's not even your first headless browser startup. There's always a question of like, will you be the category winner in a place where there's a bunch of incumbents, to be honest, that are bigger than you? They're just not targeted at the AI space. They don't have the backing of Nat Friedman. And there's a bunch of like, you're here in Silicon Valley. They're not. I don't know.Paul [00:08:47]: I don't know if that's, that was it, but like, there was a, yeah, I mean, like, I think I tried all the other ones and I was like, really disappointed. Like my background is from working at great developer tools, companies, and nothing had like the Vercel like experience. Um, like our biggest competitor actually is partly owned by private equity and they just jacked up their prices quite a bit. And the dashboard hasn't changed in five years. And I actually used them at my last company and tried them and I was like, oh man, like there really just needs to be something that's like the experience of these great infrastructure companies, like Stripe, like Clerk, like Vercel that I use and love, but oriented towards this kind of like more specific category, which is browser infrastructure, which is really technically complex. Like a lot of stuff can go wrong on the internet when you're running a browser. The internet is very vast. There's a lot of different configurations. Like there's still websites that only work with Internet Explorer out there. How do you handle that when you're running your own browser infrastructure? These are the problems that we have to think about and solve at BrowserBase. And it's, it's certainly a labor of love, but I built this for me, first and foremost, I know it's super cheesy and everyone says that for like their startups, but it really, truly was for me. If you look at like the talks I've done even before BrowserBase, and I'm just like really excited to try and build a category defining infrastructure company. And it's, it's rare to have a new category of infrastructure exist. We're here in the Chroma offices and like, you know, vector databases is a new category of infrastructure. Is it, is it, I mean, we can, we're in their office, so, you know, we can, we can debate that one later. That is one.Multimodality in AI-Powered Browsingswyx [00:10:16]: That's one of the industry debates.Paul [00:10:17]: I guess we go back to the LLMOS talk that Karpathy gave way long ago. And like the browser box was very clearly there and it seemed like the people who were building in this space also agreed that browsers are a core primitive of infrastructure for the LLMOS that's going to exist in the future. And nobody was building something there that I wanted to use. So I had to go build it myself.swyx [00:10:38]: Yeah. I mean, exactly that talk that, that honestly, that diagram, every box is a startup and there's the code box and then there's the. The browser box. I think at some point they will start clashing there. There's always the question of the, are you a point solution or are you the sort of all in one? 
And I think the point solutions tend to win quickly, but then the only ones have a very tight cohesive experience. Yeah. Let's talk about just the hard problems of browser base you have on your website, which is beautiful. Thank you. Was there an agency that you used for that? Yeah. Herb.paris.Paul [00:11:11]: They're amazing. Herb.paris. Yeah. It's H-E-R-V-E. I highly recommend for developers. Developer tools, founders to work with consumer agencies because they end up building beautiful things and the Parisians know how to build beautiful interfaces. So I got to give prep.swyx [00:11:24]: And chat apps, apparently are, they are very fast. Oh yeah. The Mistral chat. Yeah. Mistral. Yeah.Paul [00:11:31]: Late chat.swyx [00:11:31]: Late chat. And then your videos as well, it was professionally shot, right? The series A video. Yeah.Alessio [00:11:36]: Nico did the videos. He's amazing. Not the initial video that you shot at the new one. First one was Austin.Paul [00:11:41]: Another, another video pretty surprised. But yeah, I mean, like, I think when you think about how you talk about your company. You have to think about the way you present yourself. It's, you know, as a developer, you think you evaluate a company based on like the API reliability and the P 95, but a lot of developers say, is the website good? Is the message clear? Do I like trust this founder? I'm building my whole feature on. So I've tried to nail that as well as like the reliability of the infrastructure. You're right. It's very hard. And there's a lot of kind of foot guns that you run into when running headless browsers at scale. Right.Competing with Existing Headless Browser Solutionsswyx [00:12:10]: So let's pick one. You have eight features here. Seamless integration. Scalability. Fast or speed. Secure. Observable. Stealth. That's interesting. Extensible and developer first. What comes to your mind as like the top two, three hardest ones? Yeah.Running headless browsers at scalePaul [00:12:26]: I think just running headless browsers at scale is like the hardest one. And maybe can I nerd out for a second? Is that okay? I heard this is a technical audience, so I'll talk to the other nerds. Whoa. They were listening. Yeah. They're upset. They're ready. The AGI is angry. Okay. So. So how do you run a browser in the cloud? Let's start with that, right? So let's say you're using a popular browser automation framework like Puppeteer, Playwright, and Selenium. Maybe you've written a code, some code locally on your computer that opens up Google. It finds the search bar and then types in, you know, search for Latent Space and hits the search button. That script works great locally. You can see the little browser open up. You want to take that to production. You want to run the script in a cloud environment. So when your laptop is closed, your browser is doing something. The browser is doing something. Well, I, we use Amazon. You can see the little browser open up. You know, the first thing I'd reach for is probably like some sort of serverless infrastructure. I would probably try and deploy on a Lambda. But Chrome itself is too big to run on a Lambda. It's over 250 megabytes. So you can't easily start it on a Lambda. So you maybe have to use something like Lambda layers to squeeze it in there. Maybe use a different Chromium build that's lighter. And you get it on the Lambda. Great. It works. But it runs super slowly. It's because Lambdas are very like resource limited. They only run like with one vCPU. 
You can run one process at a time. Remember, Chromium is super beefy. It's barely running on my MacBook Air. I'm still downloading it from a pre-run. Yeah, from the test earlier, right? I'm joking. But it's big, you know? So like Lambda, it just won't work really well. Maybe it'll work, but you need something faster. Your users want something faster. Okay. Well, let's put it on a beefier instance. Let's get an EC2 server running. Let's throw Chromium on there. Great. Okay. I can, that works well with one user. But what if I want to run like 10 Chromium instances, one for each of my users? Okay. Well, I might need two EC2 instances. Maybe 10. All of a sudden, you have multiple EC2 instances. This sounds like a problem for Kubernetes and Docker, right? Now, all of a sudden, you're using ECS or EKS, the Kubernetes or container solutions by Amazon. You're spinning up and down containers, and you're spending a whole engineer's time on kind of maintaining this stateful distributed system. Those are some of the worst systems to run because when it's a stateful distributed system, it means that you are bound by the connections to that thing. You have to keep the browser open while someone is working with it, right? That's just a painful architecture to run. And there's all these other little gotchas with Chromium, like Chromium, which is the open source version of Chrome, by the way. You have to install all these fonts. You want emojis working in your browsers because your vision model is looking for the emoji. You need to make sure you have the emoji fonts. You need to make sure you have all the right extensions configured, like, oh, do you want ad blocking? How do you configure that? How do you actually record all these browser sessions? Like it's a headless browser. You can't look at it. So you need to have some sort of observability. Maybe you're recording videos and storing those somewhere. It all kind of adds up to be this just giant monster piece of your project when all you wanted to do was run a lot of browsers in production for this little script to go to google.com and search. And when I see a complex distributed system, I see an opportunity to build a great infrastructure company. And we really abstract that away with Browserbase where our customers can use these existing frameworks, Playwright, Puppeteer, Selenium, or our own Stagehand and connect to our browsers in a serverless-like way. And control them, and then just disconnect when they're done. And they don't have to think about the complex distributed system behind all of that. They just get a browser running anywhere, anytime. Really easy to connect to.swyx [00:15:55]: I'm sure you have questions. My standard question with anything, so essentially you're a serverless browser company, and there's been other serverless things that I'm familiar with in the past, serverless GPUs, serverless website hosting. That's where I come from with Netlify. One question is just like, you promised to spin up thousands of servers. You promised to spin up thousands of browsers in milliseconds. I feel like there's no real solution that does that yet. And I'm just kind of curious how. The only solution I know, which is to kind of keep a kind of warm pool of servers around, which is expensive, but maybe not so expensive because it's just CPUs. So I'm just like, you know. Yeah.Browsers as a Core Primitive in AI InfrastructurePaul [00:16:36]: You nailed it, right? 
I mean, how do you offer a serverless-like experience with something that is clearly not serverless, right? And the answer is, you need to be able to run... We run many browsers on single nodes. We use Kubernetes at browser base. So we have many pods that are being scheduled. We have to predictably schedule them up or down. Yes, thousands of browsers in milliseconds is the best case scenario. If you hit us with 10,000 requests, you may hit a slower cold start, right? So we've done a lot of work on predictive scaling and being able to kind of route stuff to different regions where we have multiple regions of browser base where we have different pools available. You can also pick the region you want to go to based on like lower latency, round trip, time latency. It's very important with these types of things. There's a lot of requests going over the wire. So for us, like having a VM like Firecracker powering everything under the hood allows us to be super nimble and spin things up or down really quickly with strong multi-tenancy. But in the end, this is like the complex infrastructural challenges that we have to kind of deal with at browser base. And we have a lot more stuff on our roadmap to allow customers to have more levers to pull to exchange, do you want really fast browser startup times or do you want really low costs? And if you're willing to be more flexible on that, we may be able to kind of like work better for your use cases.swyx [00:17:44]: Since you used Firecracker, shouldn't Fargate do that for you or did you have to go lower level than that? We had to go lower level than that.Paul [00:17:51]: I find this a lot with Fargate customers, which is alarming for Fargate. We used to be a giant Fargate customer. Actually, the first version of browser base was ECS and Fargate. And unfortunately, it's a great product. I think we were actually the largest Fargate customer in our region for a little while. No, what? Yeah, seriously. And unfortunately, it's a great product, but I think if you're an infrastructure company, you actually have to have a deeper level of control over these primitives. I think it's the same thing is true with databases. We've used other database providers and I think-swyx [00:18:21]: Yeah, serverless Postgres.Paul [00:18:23]: Shocker. When you're an infrastructure company, you're on the hook if any provider has an outage. And I can't tell my customers like, hey, we went down because so-and-so went down. That's not acceptable. So for us, we've really moved to bringing things internally. It's kind of opposite of what we preach. We tell our customers, don't build this in-house, but then we're like, we build a lot of stuff in-house. But I think it just really depends on what is in the critical path. We try and have deep ownership of that.Alessio [00:18:46]: On the distributed location side, how does that work for the web where you might get sort of different content in different locations, but the customer is expecting, you know, if you're in the US, I'm expecting the US version. But if you're spinning up my browser in France, I might get the French version. Yeah.Paul [00:19:02]: Yeah. That's a good question. Well, generally, like on the localization, there is a thing called locale in the browser. You can set like what your locale is. If you're like in the ENUS browser or not, but some things do IP, IP based routing. And in that case, you may want to have a proxy. 
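(A sketch of the "connect, control, disconnect" model described in this stretch of the conversation: ask a hosted provider for a remote browser, attach an existing framework to it over the Chrome DevTools Protocol, then hang up. The session-creation endpoint, header, and field names below are illustrative assumptions, not the exact Browserbase API; `connectOverCDP` is a real Playwright method.)

```ts
import { chromium } from "playwright";

// Hypothetical session-creation call; check your provider's docs for the real
// endpoint, auth header, region names, and response shape.
const session = await (
  await fetch("https://api.example-browser-provider.com/v1/sessions", {
    method: "POST",
    headers: {
      "x-api-key": process.env.PROVIDER_API_KEY!,
      "content-type": "application/json",
    },
    body: JSON.stringify({ region: "us-west-2" }), // pick a region close to the sites you hit
  })
).json();

// Attach a standard framework to the remote browser over CDP.
const browser = await chromium.connectOverCDP(session.connectUrl);
const page = browser.contexts()[0]?.pages()[0] ?? (await browser.newPage());
await page.goto("https://www.google.com");
// ...do the work, then simply disconnect; the provider tears the browser down.
await browser.close();
```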
Like let's say you're running something in the, in Europe, but you want to make sure you're showing up from the US. You may want to use one of our proxy features so you can turn on proxies to say like, make sure these connections always come from the United States, which is necessary too, because when you're browsing the web, you're coming from like a, you know, data center IP, and that can make things a lot harder to browse web. So we do have kind of like this proxy super network. Yeah. We have a proxy for you based on where you're going, so you can reliably automate the web. But if you get scheduled in Europe, that doesn't happen as much. We try and schedule you as close to, you know, your origin that you're trying to go to. But generally you have control over the regions you can put your browsers in. So you can specify West one or East one or Europe. We only have one region of Europe right now, actually. Yeah.Alessio [00:19:55]: What's harder, the browser or the proxy? I feel like to me, it feels like actually proxying reliably at scale. It's much harder than spending up browsers at scale. I'm curious. It's all hard.Paul [00:20:06]: It's layers of hard, right? Yeah. I think it's different levels of hard. I think the thing with the proxy infrastructure is that we work with many different web proxy providers and some are better than others. Some have good days, some have bad days. And our customers who've built browser infrastructure on their own, they have to go and deal with sketchy actors. Like first they figure out their own browser infrastructure and then they got to go buy a proxy. And it's like you can pay in Bitcoin and it just kind of feels a little sus, right? It's like you're buying drugs when you're trying to get a proxy online. We have like deep relationships with these counterparties. We're able to audit them and say, is this proxy being sourced ethically? Like it's not running on someone's TV somewhere. Is it free range? Yeah. Free range organic proxies, right? Right. We do a level of diligence. We're SOC 2. So we have to understand what is going on here. But then we're able to make sure that like we route around proxy providers not working. There's proxy providers who will just, the proxy will stop working all of a sudden. And then if you don't have redundant proxying on your own browsers, that's hard down for you or you may get some serious impacts there. With us, like we intelligently know, hey, this proxy is not working. Let's go to this one. And you can kind of build a network of multiple providers to really guarantee the best uptime for our customers. Yeah. So you don't own any proxies? We don't own any proxies. You're right. The team has been saying who wants to like take home a little proxy server, but not yet. We're not there yet. You know?swyx [00:21:25]: It's a very mature market. I don't think you should build that yourself. Like you should just be a super customer of them. Yeah. Scraping, I think, is the main use case for that. I guess. Well, that leads us into CAPTCHAs and also off, but let's talk about CAPTCHAs. You had a little spiel that you wanted to talk about CAPTCHA stuff.Challenges of Scaling Browser InfrastructurePaul [00:21:43]: Oh, yeah. I was just, I think a lot of people ask, if you're thinking about proxies, you're thinking about CAPTCHAs too. I think it's the same thing. You can go buy CAPTCHA solvers online, but it's the same buying experience. It's some sketchy website, you have to integrate it. 
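(The locale and proxy controls Paul describes earlier in this exchange map directly onto standard browser-automation options. A minimal Playwright sketch; the proxy address and credentials are placeholders, and a real setup would add the provider-failover logic he mentions.)

```ts
import { chromium } from "playwright";

// Route all traffic through a US proxy so IP-based geo routing sees a US visitor.
const browser = await chromium.launch({
  proxy: {
    server: "http://us.proxy.example.com:8080", // placeholder
    username: "user",
    password: "secret",
  },
});

// Tell the site a consistent story: locale and timezone that match the proxy exit.
const context = await browser.newContext({
  locale: "en-US",
  timezoneId: "America/New_York",
});

const page = await context.newPage();
await page.goto("https://www.example.com");
await browser.close();
```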
It's not fun to buy these things and you can't really trust that the docs are bad. What Browserbase does is we integrate a bunch of different CAPTCHAs. We do some stuff in-house, but generally we just integrate with a bunch of known vendors and continually monitor and maintain these things and say, is this working or not? Can we route around it or not? These are CAPTCHA solvers. CAPTCHA solvers, yeah. Not CAPTCHA providers, CAPTCHA solvers. Yeah, sorry. CAPTCHA solvers. We really try and make sure all of that works for you. I think as a dev, if I'm buying infrastructure, I want it all to work all the time and it's important for us to provide that experience by making sure everything does work and monitoring it on our own. Yeah. Right now, the world of CAPTCHAs is tricky. I think AI agents in particular are very much ahead of the internet infrastructure. CAPTCHAs are designed to block all types of bots, but there are now good bots and bad bots. I think in the future, CAPTCHAs will be able to identify who a good bot is, hopefully via some sort of KYC. For us, we've been very lucky. We have very little to no known abuse of Browserbase because we really look into who we work with. And for certain types of CAPTCHA solving, we only allow them on certain types of plans because we want to make sure that we can know what people are doing, what their use cases are. And that's really allowed us to try and be an arbiter of good bots, which is our long term goal. I want to build great relationships with people like Cloudflare so we can agree, hey, here are these acceptable bots. We'll identify them for you and make sure we flag when they come to your website. This is a good bot, you know?Alessio [00:23:23]: I see. And Cloudflare said they want to do more of this. So they're going to set by default, if they think you're an AI bot, they're going to reject. I'm curious if you think this is something that is going to be at the browser level or I mean, the DNS level with Cloudflare seems more where it should belong. But I'm curious how you think about it.Paul [00:23:40]: I think the web's going to change. You know, I think that the Internet as we have it right now is going to change. And we all need to just accept that the cat is out of the bag. And instead of kind of like wishing the Internet was like it was in the 2000s, we can have free content line that wouldn't be scraped. It's just it's not going to happen. And instead, we should think about like, one, how can we change? How can we change the models of, you know, information being published online so people can adequately commercialize it? But two, how do we rebuild applications that expect that AI agents are going to log in on their behalf? Those are the things that are going to allow us to kind of like identify good and bad bots. And I think the team at Clerk has been doing a really good job with this on the authentication side. I actually think that auth is the biggest thing that will prevent agents from accessing stuff, not captchas. And I think there will be agent auth in the future. I don't know if it's going to happen from an individual company, but actually authentication providers that have a, you know, hidden login as agent feature, which will then you put in your email, you'll get a push notification, say like, hey, your browser-based agent wants to log into your Airbnb. You can approve that and then the agent can proceed. That really circumvents the need for captchas or logging in as you and sharing your password. 
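(The "log in as agent" flow Paul sketches here is speculative, not an existing API. Still, the shape he describes, a short-lived token paired to a human account but carrying only narrow, human-approved scopes, can be written down. Everything in this sketch is hypothetical.)

```ts
// Hypothetical agent-auth types: a delegated identity that is distinct from the
// human, approved via push notification, and limited to narrow scopes.
type AgentScope = "airbnb:book" | "airbnb:read"; // e.g. can book, cannot message

interface AgentToken {
  humanUserId: string;  // the account the agent acts on behalf of
  agentId: string;      // never presumed to be the human
  scopes: AgentScope[]; // granted explicitly by the human
  expiresAt: Date;      // short-lived by design
}

function canPerform(token: AgentToken, scope: AgentScope): boolean {
  return token.expiresAt > new Date() && token.scopes.includes(scope);
}
```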
I think agent auth is going to be one way we identify good bots going forward. And I think a lot of this captcha solving stuff is really short-term problems as the internet kind of reorients itself around how it's going to work with agents browsing the web, just like people do. Yeah.Managing Distributed Browser Locations and Proxiesswyx [00:24:59]: Stitch recently was on Hacker News for talking about agent experience, AX, which is a thing that Netlify is also trying to clone and coin and talk about. And we've talked about this on our previous episodes before in a sense that I actually think that's like maybe the only part of the tech stack that needs to be kind of reinvented for agents. Everything else can stay the same, CLIs, APIs, whatever. But auth, yeah, we need agent auth. And it's mostly like short-lived, like it should not, it should be a distinct, identity from the human, but paired. I almost think like in the same way that every social network should have your main profile and then your alt accounts or your Finsta, it's almost like, you know, every, every human token should be paired with the agent token and the agent token can go and do stuff on behalf of the human token, but not be presumed to be the human. Yeah.Paul [00:25:48]: It's like, it's, it's actually very similar to OAuth is what I'm thinking. And, you know, Thread from Stitch is an investor, Colin from Clerk, Octaventures, all investors in browser-based because like, I hope they solve this because they'll make browser-based submission more possible. So we don't have to overcome all these hurdles, but I think it will be an OAuth-like flow where an agent will ask to log in as you, you'll approve the scopes. Like it can book an apartment on Airbnb, but it can't like message anybody. And then, you know, the agent will have some sort of like role-based access control within an application. Yeah. I'm excited for that.swyx [00:26:16]: The tricky part is just, there's one, one layer of delegation here, which is like, you're authoring my user's user or something like that. I don't know if that's tricky or not. Does that make sense? Yeah.Paul [00:26:25]: You know, actually at Twilio, I worked on the login identity and access. Management teams, right? So like I built Twilio's login page.swyx [00:26:31]: You were an intern on that team and then you became the lead in two years? Yeah.Paul [00:26:34]: Yeah. I started as an intern in 2016 and then I was the tech lead of that team. How? That's not normal. I didn't have a life. He's not normal. Look at this guy. I didn't have a girlfriend. I just loved my job. I don't know. I applied to 500 internships for my first job and I got rejected from every single one of them except for Twilio and then eventually Amazon. And they took a shot on me and like, I was getting paid money to write code, which was my dream. Yeah. Yeah. I'm very lucky that like this coding thing worked out because I was going to be doing it regardless. And yeah, I was able to kind of spend a lot of time on a team that was growing at a company that was growing. So it informed a lot of this stuff here. I think these are problems that have been solved with like the SAML protocol with SSO. I think it's a really interesting stuff with like WebAuthn, like these different types of authentication, like schemes that you can use to authenticate people. The tooling is all there. It just needs to be tweaked a little bit to work for agents. And I think the fact that there are companies that are already. 
Providing authentication as a service really sets it up. Well, the thing that's hard is like reinventing the internet for agents. We don't want to rebuild the internet. That's an impossible task. And I think people often say like, well, we'll have this second layer of APIs built for agents. I'm like, we will for the top use cases, but instead of we can just tweak the internet as is, which is on the authentication side, I think we're going to be the dumb ones going forward. Unfortunately, I think AI is going to be able to do a lot of the tasks that we do online, which means that it will be able to go to websites, click buttons on our behalf and log in on our behalf too. So with this kind of like web agent future happening, I think with some small structural changes, like you said, it feels like it could all slot in really nicely with the existing internet.Handling CAPTCHAs and Agent Authenticationswyx [00:28:08]: There's one more thing, which is the, your live view iframe, which lets you take, take control. Yeah. Obviously very key for operator now, but like, was, is there anything interesting technically there or that the people like, well, people always want this.Paul [00:28:21]: It was really hard to build, you know, like, so, okay. Headless browsers, you don't see them, right. They're running. They're running in a cloud somewhere. You can't like look at them. And I just want to really make, it's a weird name. I wish we came up with a better name for this thing, but you can't see them. Right. But customers don't trust AI agents, right. At least the first pass. So what we do with our live view is that, you know, when you use browser base, you can actually embed a live view of the browser running in the cloud for your customer to see it working. And that's what the first reason is the build trust, like, okay, so I have this script. That's going to go automate a website. I can embed it into my web application via an iframe and my customer can watch. I think. And then we added two way communication. So now not only can you watch the browser kind of being operated by AI, if you want to pause and actually click around type within this iframe that's controlling a browser, that's also possible. And this is all thanks to some of the lower level protocol, which is called the Chrome DevTools protocol. It has a API called start screencast, and you can also send mouse clicks and button clicks to a remote browser. And this is all embeddable within iframes. You have a browser within a browser, yo. And then you simulate the screen, the click on the other side. Exactly. And this is really nice often for, like, let's say, a capture that can't be solved. You saw this with Operator, you know, Operator actually uses a different approach. They use VNC. So, you know, you're able to see, like, you're seeing the whole window here. What we're doing is something a little lower level with the Chrome DevTools protocol. It's just PNGs being streamed over the wire. But the same thing is true, right? Like, hey, I'm running a window. Pause. Can you do something in this window? Human. Okay, great. Resume. Like sometimes 2FA tokens. Like if you get that text message, you might need a person to type that in. Web agents need human-in-the-loop type workflows still. You still need a person to interact with the browser. And building a UI to proxy that is kind of hard. You may as well just show them the whole browser and say, hey, can you finish this up for me? And then let the AI proceed on afterwards. 
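(The Chrome DevTools Protocol calls Paul names above, a screencast stream plus remote input events, are real CDP methods. Here is a minimal sketch using Playwright's raw CDP session; in a real live-view you would forward the PNG frames over a websocket to an iframe or canvas in the customer's app, and the coordinates below are placeholders.)

```ts
import { chromium } from "playwright";

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://www.example.com");

// Raw CDP session against this page.
const cdp = await page.context().newCDPSession(page);

// Stream frames: Page.startScreencast emits base64 PNGs we can ship to the client.
cdp.on("Page.screencastFrame", async (frame) => {
  // frame.data is a base64-encoded PNG; forward it, then acknowledge it.
  await cdp.send("Page.screencastFrameAck", { sessionId: frame.sessionId });
});
await cdp.send("Page.startScreencast", { format: "png", everyNthFrame: 2 });

// "Human take-over": inject a click coming back from the embedded viewer.
await cdp.send("Input.dispatchMouseEvent", {
  type: "mousePressed", x: 100, y: 200, button: "left", clickCount: 1,
});
await cdp.send("Input.dispatchMouseEvent", {
  type: "mouseReleased", x: 100, y: 200, button: "left", clickCount: 1,
});
```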
Is there a future where I stream my current desktop to browser base? I don't think so. I think we're very much cloud infrastructure. Yeah. You know, but I think a lot of the stuff we're doing, we do want to, like, build tools. Like, you know, we'll talk about the stage and, you know, web agent framework in a second. But, like, there's a case where a lot of people are going desktop first for, you know, consumer use. And I think cloud is doing a lot of this, where I expect to see, you know, MCPs really oriented around the cloud desktop app for a reason, right? Like, I think a lot of these tools are going to run on your computer because it makes... I think it's breaking out. People are putting it on a server. Oh, really? Okay. Well, sweet. We'll see. We'll see that. I was surprised, though, wasn't I? I think that the browser company, too, with Dia Browser, it runs on your machine. You know, it's going to be...swyx [00:30:50]: What is it?Paul [00:30:51]: So, Dia Browser, as far as I understand... I used to use Arc. Yeah. I haven't used Arc. But I'm a big fan of the browser company. I think they're doing a lot of cool stuff in consumer. As far as I understand, it's a browser where you have a sidebar where you can, like, chat with it and it can control the local browser on your machine. So, if you imagine, like, what a consumer web agent is, which it lives alongside your browser, I think Google Chrome has Project Marina, I think. I almost call it Project Marinara for some reason. I don't know why. It's...swyx [00:31:17]: No, I think it's someone really likes the Waterworld. Oh, I see. The classic Kevin Costner. Yeah.Paul [00:31:22]: Okay. Project Marinara is a similar thing to the Dia Browser, in my mind, as far as I understand it. You have a browser that has an AI interface that will take over your mouse and keyboard and control the browser for you. Great for consumer use cases. But if you're building applications that rely on a browser and it's more part of a greater, like, AI app experience, you probably need something that's more like infrastructure, not a consumer app.swyx [00:31:44]: Just because I have explored a little bit in this area, do people want branching? So, I have the state. Of whatever my browser's in. And then I want, like, 100 clones of this state. Do people do that? Or...Paul [00:31:56]: People don't do it currently. Yeah. But it's definitely something we're thinking about. I think the idea of forking a browser is really cool. Technically, kind of hard. We're starting to see this in code execution, where people are, like, forking some, like, code execution, like, processes or forking some tool calls or branching tool calls. Haven't seen it at the browser level yet. But it makes sense. Like, if an AI agent is, like, using a website and it's not sure what path it wants to take to crawl this website. To find the information it's looking for. It would make sense for it to explore both paths in parallel. And that'd be a very, like... A road not taken. Yeah. And hopefully find the right answer. And then say, okay, this was actually the right one. And memorize that. And go there in the future. On the roadmap. For sure. Don't make my roadmap, please. You know?Alessio [00:32:37]: How do you actually do that? Yeah. How do you fork? I feel like the browser is so stateful for so many things.swyx [00:32:42]: Serialize the state. Restore the state. I don't know.Paul [00:32:44]: So, it's one of the reasons why we haven't done it yet. It's hard. You know? 
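(For the limited "serialize the state, restore the state" idea swyx raises, one tool that exists today is Playwright's storageState, which snapshots cookies and localStorage and seeds a new context with them. As the next part of the conversation makes clear, this is not a true fork of a running page: in-memory JavaScript state, half-filled forms, and open connections are not captured. URLs are placeholders.)

```ts
import { chromium } from "playwright";

const browser = await chromium.launch();

// Original context: log in, do some work.
const original = await browser.newContext();
const page = await original.newPage();
await page.goto("https://www.example.com/login");
// ...perform login / actions here...

// Snapshot cookies + localStorage only.
const state = await original.storageState();

// "Clone" into a fresh context seeded with the same storage.
const clone = await browser.newContext({ storageState: state });
const clonedPage = await clone.newPage();
await clonedPage.goto("https://www.example.com/dashboard");
await browser.close();
```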
Like, to truly fork, it's actually quite difficult. The naive way is to open the same page in a new tab and then, like, hope that it's at the same thing. But if you have a form halfway filled, you may have to, like, take the whole, you know, container. Pause it. All the memory. Duplicate it. Restart it from there. It could be very slow. So, we haven't found a thing. Like, the easy thing to fork is just, like, copy the page object. You know? But I think there needs to be something a little bit more robust there. Yeah.swyx [00:33:12]: So, MorphLabs has this infinite branch thing. Like, wrote a custom fork of Linux or something that let them save the system state and clone it. MorphLabs, hit me up. I'll be a customer. Yeah. That's the only. I think that's the only way to do it. Yeah. Like, unless Chrome has some special API for you. Yeah.Paul [00:33:29]: There's probably something we'll reverse engineer one day. I don't know. Yeah.Alessio [00:33:32]: Let's talk about StageHand, the AI web browsing framework. You have three core components, Observe, Extract, and Act. Pretty clean landing page. What was the idea behind making a framework? Yeah.Stagehand: AI web browsing frameworkPaul [00:33:43]: So, there's three frameworks that are very popular or already exist, right? Puppeteer, Playwright, Selenium. Those are for building hard-coded scripts to control websites. And as soon as I started to play with LLMs plus browsing, I caught myself, you know, code-genning Playwright code to control a website. I would, like, take the DOM. I'd pass it to an LLM. I'd say, can you generate the Playwright code to click the appropriate button here? And it would do that. And I was like, this really should be part of the frameworks themselves. And I became really obsessed with SDKs that take natural language as part of, like, the API input. And that's what StageHand is. StageHand exposes three APIs, and it's a super set of Playwright. So, if you go to a page, you may want to take an action, click on the button, fill in the form, etc. That's what the act command is for. You may want to extract some data. This one takes a natural language, like, extract the winner of the Super Bowl from this page. You can give it a Zod schema, so it returns a structured output. And then maybe you're building an API. You can do an agent loop, and you want to kind of see what actions are possible on this page before taking one. You can do observe. So, you can observe the actions on the page, and it will generate a list of actions. You can guide it, like, give me actions on this page related to buying an item. And you can, like, buy it now, add to cart, view shipping options, and pass that to an LLM, an agent loop, to say, what's the appropriate action given this high-level goal? So, StageHand isn't a web agent. It's a framework for building web agents. And we think that agent loops are actually pretty close to the application layer because every application probably has different goals or different ways it wants to take steps. I don't think I've seen a generic. Maybe you guys are the experts here. I haven't seen, like, a really good AI agent framework here. Everyone kind of has their own special sauce, right? I see a lot of developers building their own agent loops, and they're using tools. And I view StageHand as the browser tool. So, we expose act, extract, observe. Your agent can call these tools. And from that, you don't have to worry about it. You don't have to worry about generating playwright code performantly. 
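(A sketch of the three StageHand primitives as Paul describes them here: act, extract with a Zod schema, and observe. Method names follow the description in this conversation, but exact signatures, setup, and model configuration vary across StageHand versions, so treat this as illustrative rather than canonical.)

```ts
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

const stagehand = new Stagehand({ env: "LOCAL" }); // or point it at a remote browser
await stagehand.init();
const page = stagehand.page;

await page.goto("https://www.example.com");

// act: a natural-language action instead of hand-written selectors.
await page.act("click the sign in button");

// extract: a natural-language instruction plus a Zod schema for structured output.
const result = await page.extract({
  instruction: "extract the winner of the Super Bowl from this page",
  schema: z.object({ winner: z.string() }),
});

// observe: list candidate actions to feed into your own agent loop.
const actions = await page.observe("actions on this page related to buying an item");

await stagehand.close();
```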
You don't have to worry about running it. You can kind of just integrate these three tool calls into your agent loop and reliably automate the web.swyx [00:35:48]: A special shout-out to Anirudh, who I met at your dinner, who I think listens to the pod. Yeah. Hey, Anirudh.Paul [00:35:54]: Anirudh's the man. He's a StageHand guy.swyx [00:35:56]: I mean, the interesting thing about each of these APIs is they're kind of each startup. Like, specifically extract, you know, Firecrawl is extract. There's, like, Expand AI. There's a whole bunch of, like, extract companies. They just focus on extract. I'm curious. Like, I feel like you guys are going to collide at some point. Like, right now, it's friendly. Everyone's in a blue ocean. At some point, it's going to be valuable enough that there's some turf battle here. I don't think you have a dog in a fight. I think you can mock extract to use an external service if they're better at it than you. But it's just an observation that, like, in the same way that I see each option, each checkbox in the sidebar of custom GPTs becoming a startup or each box in the Karpathy chart being a startup. Like, this is also becoming a thing. Yeah.Paul [00:36:41]: I mean, like, so the way StageHand works is that it's MIT-licensed, completely open source. You bring your own API key to your LLM of choice. You could choose your LLM. We don't make any money off of the extract or really. We only really make money if you choose to run it with our browser. You don't have to. You can actually use your own browser, a local browser. You know, StageHand is completely open source for that reason. And, yeah, like, I think if you're building really complex web scraping workflows, I don't know if StageHand is the tool for you. I think it's really more if you're building an AI agent that needs a few general tools or if it's doing a lot of, like, web automation-intensive work. But if you're building a scraping company, StageHand is not your thing. You probably want something that's going to, like, get HTML content, you know, convert that to Markdown, query it. That's not what StageHand does. StageHand is more about reliability. I think we focus a lot on reliability and less so on cost optimization and speed at this point.swyx [00:37:33]: I actually feel like StageHand, so the way that StageHand works, it's like, you know, page.act, click on the quick start. Yeah. It's kind of the integration test for the code that you would have to write anyway, like the Puppeteer code that you have to write anyway. And when the page structure changes, because it always does, then this is still the test. This is still the test that I would have to write. Yeah. So it's kind of like a testing framework that doesn't need implementation detail.Paul [00:37:56]: Well, yeah. I mean, Puppeteer, Playwright, and Selenium were all designed as testing frameworks, right? Yeah. And now people are, like, hacking them together to automate the web. I would say, and, like, maybe this is, like, me being too specific. But, like, when I write tests, if the page structure changes. Without me knowing, I want that test to fail. So I don't know if, like, AI, like, regenerating that. Like, people are using StageHand for testing. But it's more for, like, usability testing, not, like, testing of, like, does the front end, like, has it changed or not. Okay. But generally where we've seen people, like, really, like, take off is, like, if they're using, you know, something. 
If they want to build a feature in their application that's kind of like Operator or Deep Research, they're using StageHand to kind of power that tool calling in their own agent loop. Okay. Cool.swyx [00:38:37]: So let's go into Operator, the first big agent launch of the year from OpenAI. Seems like they have a whole bunch scheduled. You were on break and your phone blew up. What's your just general view of computer use agents is what they're calling it. The overall category before we go into Open Operator, just the overall promise of Operator. I will observe that I tried it once. It was okay. And I never tried it again.OpenAI's Operator and computer use agentsPaul [00:38:58]: That tracks with my experience, too. Like, I'm a huge fan of the OpenAI team. Like, I think that I do not view Operator as the company. I'm not a company killer for browser base at all. I think it actually shows people what's possible. I think, like, computer use models make a lot of sense. And I'm actually most excited about computer use models is, like, their ability to, like, really take screenshots and reasoning and output steps. I think that using mouse click or mouse coordinates, I've seen that proved to be less reliable than I would like. And I just wonder if that's the right form factor. What we've done with our framework is anchor it to the DOM itself, anchor it to the actual item. So, like, if it's clicking on something, it's clicking on that thing, you know? Like, it's more accurate. No matter where it is. Yeah, exactly. Because it really ties in nicely. And it can handle, like, the whole viewport in one go, whereas, like, Operator can only handle what it sees. Can you hover? Is hovering a thing that you can do? I don't know if we expose it as a tool directly, but I'm sure there's, like, an API for hovering. Like, move mouse to this position. Yeah, yeah, yeah. I think you can trigger hover, like, via, like, the JavaScript on the DOM itself. But, no, I think, like, when we saw computer use, everyone's eyes lit up because they realized, like, wow, like, AI is going to actually automate work for people. And I think seeing that kind of happen from both of the labs, and I'm sure we're going to see more labs launch computer use models, I'm excited to see all the stuff that people build with it. I think that I'd love to see computer use power, like, controlling a browser on browser base. And I think, like, Open Operator, which was, like, our open source version of OpenAI's Operator, was our first take on, like, how can we integrate these models into browser base? And we handle the infrastructure and let the labs do the models. I don't have a sense that Operator will be released as an API. I don't know. Maybe it will. I'm curious to see how well that works because I think it's going to be really hard for a company like OpenAI to do things like support CAPTCHA solving or, like, have proxies. Like, I think it's hard for them structurally. Imagine this New York Times headline, OpenAI CAPTCHA solving. Like, that would be a pretty bad headline, this New York Times headline. Browser base solves CAPTCHAs. No one cares. No one cares. And, like, our investors are bored. Like, we're all okay with this, you know? We're building this company knowing that the CAPTCHA solving is short-lived until we figure out how to authenticate good bots. I think it's really hard for a company like OpenAI, who has this brand that's so, so good, to balance with, like, the icky parts of web automation, which it can be kind of complex to solve. 
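(Paul's point earlier in this exchange about anchoring actions to the DOM rather than raw mouse coordinates is easy to see in plain Playwright. The selector text and coordinates below are placeholders.)

```ts
import { Page } from "playwright";

// Coordinate-based action, roughly what a pure computer-use model emits: it breaks
// if the layout shifts, the viewport size changes, or the element scrolls away.
async function clickByCoordinates(page: Page) {
  await page.mouse.click(640, 480);
}

// DOM-anchored action: resolved against the actual element, so it works no matter
// where that element happens to be rendered.
async function clickByElement(page: Page) {
  await page.getByRole("button", { name: "Add to cart" }).click();
}
```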
I'm sure OpenAI knows who to call whenever they need you. Yeah, right. I'm sure they'll have a great partnership.Alessio [00:41:23]: And is Open Operator just, like, a marketing thing for you? Like, how do you think about resource allocation? So, you can spin this up very quickly. And now there's all this, like, open deep research, just open all these things that people are building. We started it, you know. You're the original Open. We're the original Open operator, you know? Is it just, hey, look, this is a demo, but, like, we'll help you build out an actual product for yourself? Like, are you interested in going more of a product route? That's kind of the OpenAI way, right? They started as a model provider and then…Paul [00:41:53]: Yeah, we're not interested in going the product route yet. I view Open Operator as a model provider. It's a reference project, you know? Let's show people how to build these things using the infrastructure and models that are out there. And that's what it is. It's, like, Open Operator is very simple. It's an agent loop. It says, like, take a high-level goal, break it down into steps, use tool calling to accomplish those steps. It takes screenshots and feeds those screenshots into an LLM with the step to generate the right action. It uses stagehand under the hood to actually execute this action. It doesn't use a computer use model. And it, like, has a nice interface using the live view that we talked about, the iframe, to embed that into an application. So I felt like people on launch day wanted to figure out how to build their own version of this. And we turned that around really quickly to show them. And I hope we do that with other things like deep research. We don't have a deep research launch yet. I think David from AOMNI actually has an amazing open deep research that he launched. It has, like, 10K GitHub stars now. So he's crushing that. But I think if people want to build these features natively into their application, they need good reference projects. And I think Open Operator is a good example of that.swyx [00:42:52]: I don't know. Actually, I'm actually pretty bullish on API-driven operator. Because that's the only way that you can sort of, like, once it's reliable enough, obviously. And now we're nowhere near. But, like, give it five years. It'll happen, you know. And then you can sort of spin this up and browsers are working in the background and you don't necessarily have to know. And it just is booking restaurants for you, whatever. I can definitely see that future happening. I had this on the landing page here. This might be a slightly out of order. But, you know, you have, like, sort of three use cases for browser base. Open Operator. Or this is the operator sort of use case. It's kind of like the workflow automation use case. And it completes with UiPath in the sort of RPA category. Would you agree with that? Yeah, I would agree with that. And then there's Agents we talked about already. And web scraping, which I imagine would be the bulk of your workload right now, right?Paul [00:43:40]: No, not at all. I'd say actually, like, the majority is browser automation. We're kind of expensive for web scraping. Like, I think that if you're building a web scraping product, if you need to do occasional web scraping or you have to do web scraping that works every single time, you want to use browser automation. Yeah. You want to use browser-based. But if you're building web scraping workflows, what you should do is have a waterfall. 
You should have the first request is a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee. ScrapingBee is a great example, right? Yeah. And then, like, if those two don't work, bring out the heavy hitter. Like, browser-based will 100% work, right? It will load the page in a real browser, hydrate it. I see.swyx [00:44:21]: Because a lot of people don't render to JS.swyx [00:44:25]: Yeah, exactly.Paul [00:44:26]: So, I mean, the three big use cases, right? Like, you know, automation, web data collection, and then, you know, if you're building anything agentic that needs, like, a browser tool, you want to use browser-based.Alessio [00:44:35]: Is there any use case that, like, you were super surprised by that people might not even think about? Oh, yeah. Or is it, yeah, anything that you can share? The long tail is crazy. Yeah.Surprising use cases of BrowserbasePaul [00:44:44]: One of the case studies on our website that I think is the most interesting is this company called Benny. So, the way that it works is if you're on food stamps in the United States, you can actually get rebates if you buy certain things. Yeah. You buy some vegetables. You submit your receipt to the government. They'll give you a little rebate back. Say, hey, thanks for buying vegetables. It's good for you. That process of submitting that receipt is very painful. And the way Benny works is you use their app to take a photo of your receipt, and then Benny will go submit that receipt for you and then deposit the money into your account. That's actually using no AI at all. It's all, like, hard-coded scripts. They maintain the scripts. They've been doing a great job. And they build this amazing consumer app. But it's an example of, like, all these, like, tedious workflows that people have to do to kind of go about their business. And they're doing it for the sake of their day-to-day lives. And I had never known about, like, food stamp rebates or the complex forms you have to do to fill them. But the world is powered by millions and millions of tedious forms, visas. You know, Emirate Lighthouse is a customer, right? You know, they do the O1 visa. Millions and millions of forms are taking away humans' time. And I hope that Browserbase can help power software that automates away the web forms that we don't need anymore. Yeah.swyx [00:45:49]: I mean, I'm very supportive of that. I mean, forms. I do think, like, government itself is a big part of it. I think the government itself should embrace AI more to do more sort of human-friendly form filling. Mm-hmm. But I'm not optimistic. I'm not holding my breath. Yeah. We'll see. Okay. I think I'm about to zoom out. I have a little brief thing on computer use, and then we can talk about founder stuff, which is, I tend to think of developer tooling markets in impossible triangles, where everyone starts in a niche, and then they start to branch out. So I already hinted at a little bit of this, right? We mentioned Morph. We mentioned E2B. We mentioned Firecrawl. And then there's Browserbase. So there's, like, all this stuff of, like, have serverless virtual computer that you give to an agent and let them do stuff with it. And there's various ways of connecting it to the internet. You can just connect to a search API, like SERP API, whatever other, like, Exa is another one. That's what you're searching. 
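(The curl-then-scraping-API-then-real-browser waterfall Paul describes above maps to a simple fallback chain. A sketch; the scraping-API endpoint and the crude "did we get real content?" heuristic are placeholders.)

```ts
import { chromium } from "playwright";

// Cheapest option first, real browser last.
async function fetchPage(url: string): Promise<string> {
  // 1. Plain HTTP request (the "curl" tier).
  const res = await fetch(url);
  if (res.ok) {
    const html = await res.text();
    if (html.length > 5_000) return html; // crude placeholder check for real content
  }

  // 2. A scraping-specific API (placeholder endpoint and params).
  const viaApi = await fetch(
    `https://scraping-api.example.com/render?url=${encodeURIComponent(url)}`
  );
  if (viaApi.ok) return viaApi.text();

  // 3. Heavy hitter: load the page in a real browser and let it hydrate.
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle" });
  const html = await page.content();
  await browser.close();
  return html;
}
```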
You can also have a JSON markdown extractor, which is Firecrawl. Or you can have a virtual browser like Browserbase, or you can have a virtual machine like Morph. And then there's also maybe, like, a virtual sort of code environment, like Code Interpreter. So, like, there's just, like, a bunch of different ways to tackle the problem of give a computer to an agent. And I'm just kind of wondering if you see, like, everyone's just, like, happily coexisting in their respective niches. And as a developer, I just go and pick, like, a shopping basket of one of each. Or do you think that eventually people will collide?Future of browser automation and market competitionPaul [00:47:18]: I think that currently it's not a zero-sum market. Like, I think we're talking about... I think we're talking about all of knowledge work that people do that can be automated online. All of these, like, trillions of hours that happen online where people are working. And I think that there's so much software to be built that, like, I tend not to think about how these companies will collide. I just try to solve the problem as best as I can and make this specific piece of infrastructure, which I think is an important primitive, the best I possibly can. And yeah, I think there's players that are going to launch, like, over-the-top, you know, platforms, like agent platforms that have all these tools built in, right? Like, who's building the Rippling for agent tools that has the search tool, the browser tool, the operating system tool, right? There are some. There are some. There are some, right? And I think in the end, what I have seen in my time as a developer, and I look at all the favorite tools that I have, is that, like, for tools and primitives with sufficient levels of complexity, you need to have a solution that's really bespoke to that primitive, you know? And I am sufficiently convinced that the browser is complex enough to deserve a primitive. Obviously, I have to. I'm the founder of Browserbase, right? I'm talking my book. But, like, I think maybe I can give you one spicy take against, like, maybe just whole OS running. I think that when I look at computer use when it first came out, I saw that the majority of use cases for computer use were controlling a browser. And do we really need to run an entire operating system just to control a browser? I don't think so. I don't think that's necessary. You know, Browserbase can run browsers for way cheaper than you can if you're running a full-fledged OS with a GUI, you know, operating system. And I think that's just an advantage of the browser. It is, like, browsers are little OSs, and you can run them very efficiently if you orchestrate it well. And I think that allows us to offer 90% of the, you know, functionality in the platform needed at 10% of the cost of running a full OS. Yeah.Open Operator: Browserbase's Open-Source Alternativeswyx [00:49:16]: I definitely see the logic in that. There's a Marc Andreessen quote. I don't know if you know this one. Where he basically observed that the browser is turning the operating system into a poorly debugged set of device drivers, because most of the apps are moved from the OS to the browser. So you can just run browsers.Paul [00:49:31]: There's a place for OSs, too. Like, I think that there are some applications that only run on Windows operating systems.
And Eric from pig.dev in this upcoming YC batch, or last YC batch, like, he's building a way to run tons of Windows operating systems for you to control with your agent. And like, there's some legacy EHR systems that only run on Internet-controlled systems. Yeah.Paul [00:49:54]: I think that's it. I think, like, there are use cases for specific operating systems for specific legacy software. And like, I'm excited to see what he does with that. I just wanted to give a shout out to the pig.dev website.swyx [00:50:06]: The pigs jump when you click on them. Yeah. That's great.Paul [00:50:08]: Eric, he's the former co-founder of banana.dev, too.swyx [00:50:11]: Oh, that Eric. Yeah. That Eric. Okay. Well, he abandoned bananas for pigs. I hope he doesn't start going around with pigs now.Alessio [00:50:18]: Like he was going around with bananas. A little toy pig. Yeah. Yeah. I love that. What else are we missing? I think we covered a lot of, like, the Browserbase product history, but. What do you wish people asked you? Yeah.Paul [00:50:29]: I wish people asked me more about, like, what will the future of software look like? Because I think that's really where I've spent a lot of time thinking about why do Browserbase. Like, for me, starting a company is like a means of last resort. Like, you shouldn't start a company unless you absolutely have to. And I remain convinced that the future of software is software that you're going to click a button and it's going to do stuff on your behalf. Right now, with software, you click a button and it maybe, like, calls back an API and, like, computes some numbers. It, like, modifies some text, whatever. But the future of software is software using software. So, I may log into my accounting website for my business, click a button, and it's going to go load up my Gmail, search my emails, find the thing, upload the receipt, and then comment it for me. Right? And it may do it using APIs, maybe a browser. I don't know. I think it's a little bit of both. But that's completely different from how we've built software so far. And I think that future of software has different infrastructure requirements. It's going to require different UIs. It's going to require different pieces of infrastructure. I think the browser infrastructure is one piece that fits into that, along with all the other categories you mentioned. So, I think that it's going to require developers to think differently about how they've built software for, you know
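Two of the patterns Paul describes in this segment are concrete enough to sketch in code. First, the Open Operator-style agent loop: observe the page, ask an LLM for the next step, execute it in the browser, repeat. This is a hedged outline rather than Open Operator's actual implementation; `browser` and `llm_propose_action` are hypothetical stand-ins for whatever Browserbase/Stagehand session and model client you wire in.

```python
# Sketch of the agent loop described above: goal -> observe -> propose -> act.
# `browser` and `llm_propose_action` are hypothetical stand-ins, not a real SDK.
def run_agent(goal: str, browser, llm_propose_action, max_steps: int = 20):
    history = []
    for _ in range(max_steps):
        screenshot = browser.screenshot()                        # observe the current page
        action = llm_propose_action(goal, screenshot, history)   # LLM picks the next step
        if action.get("type") == "done":
            return action.get("result")
        browser.execute(action)                                  # e.g. click/type, executed by the browser layer
        history.append(action)
    raise RuntimeError("Agent did not finish within max_steps")
```

Second, the scraping "waterfall": try the cheapest tier first and only fall back to a real headless browser when the page needs it. Again a sketch under stated assumptions; `fetch_via_scraping_api` and `fetch_via_headless_browser` are placeholders for whichever vendor SDKs you actually use.

```python
import requests

def fetch_via_scraping_api(url: str) -> str | None:
    return None  # placeholder: call your scraping-API vendor here

def fetch_via_headless_browser(url: str) -> str:
    raise NotImplementedError  # placeholder: drive a real headless browser (e.g. via Browserbase)

def fetch(url: str) -> str:
    # Tier 1: plain HTTP request -- cheapest, fine for server-rendered pages.
    try:
        resp = requests.get(url, timeout=10)
        if resp.ok and len(resp.text) > 500:   # crude check that we got real content
            return resp.text
    except requests.RequestException:
        pass
    # Tier 2: a scraping-specific API (proxies, retries).
    html = fetch_via_scraping_api(url)
    if html:
        return html
    # Tier 3: load and hydrate the page in a real browser.
    return fetch_via_headless_browser(url)
```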
We compiled our favorite clips on developer tools and developer experience (DevX). We discuss why DevX has become essential for developer-focused companies and how it drives adoption to grow your product. Learn what makes developers a unique and discerning customer base, and hear practical strategies for designing exceptional tools and platforms. Our guests also share lessons learned from their own experiences—whether in creating frictionless integrations, maintaining a strong feedback culture, or enabling internal platform adoption. Through compelling stories and actionable advice, this episode is packed with lessons on how to build products that developers love. Playlist of Full Episodes from This Compilation: https://www.youtube.com/playlist?list=PL31JETR9AR0FV-46VR4G_n6xi4WdXEx-2 Inside the episode... The importance of developer experience and why it's a priority for developer-facing companies. Key differences between building developer tools and end-user applications. How DevX differs from DevRel and the synergy between the two. Metrics for measuring the success of developer tools: adoption, satisfaction, and revenue. Insights into abstraction ladders and balancing complexity and power. Customer research strategies for validating assumptions and prioritizing features. Stripe's culture of craftsmanship and creating “surprisingly great” experiences. The importance of dogfooding and feedback loops in building trusted platforms. Balancing enablement and avoiding gatekeeping in internal platform adoption. Maintaining consistency and quality across APIs, CLIs, and other resources. Mentioned in this episode Stripe Doppler Heroku Abstraction ladders Developer feedback loops Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Subscribe to the Convergence podcast wherever you get podcasts including video episodes to get updated on the other crucial conversations that we'll post on YouTube at youtube.com/@convergencefmpodcast Learn something? Give us a 5 star review and like the podcast on YouTube. It's how we grow. Follow the Pod Linkedin: https://www.linkedin.com/company/convergence-podcast/ X: https://twitter.com/podconvergence Instagram: @podconvergence
With the number of libraries available to Go developers these days, you'd think building a CLI app was now a trivial matter. But like many things in software development, it depends. In this episode, we explore the challenges that arose during one team's journey towards a production-ready CLI.
Although we are all looking forward to an exciting transatlantic election in November between Trump and Harris, we may have a general election on our own hands, with a Harris of our own thinking about taking the chance. But is this the best time to call an election? What can Simon Harris learn from the Taoisigh who went before him about the importance of election 'timing'? John Downing, political correspondent of the Irish Independent, spoke with Tessa Fleming about the best time to hold an election. Glossary (Foclóir): Oll-toghchán – general election - Geilleagar - economy - A loit – to hurt - Boithrín na smaointe – memory lane - Gearchéim – crisis - Clis – crash - Go hachrannach – acrimoniously - Eisceachtúil – exceptional - Teipeanna – failures - Triúracht - Troika - Cuimsitheach - comprehensive - I mbarr a réime – at his peak - Tapaigh an deis – seize the opportunity - Iarrthóirí – candidates - Comhghleacaí – colleague - Focal scoir – to conclude - See omnystudio.com/listener for privacy information.
#278: In today's tech landscape, developers often find themselves caught in the middle of a debate that never seems to age: GUI or CLI? While the tools and interfaces we use may evolve, the core question remains. How do we balance the efficiency and familiarity of graphical user interfaces (GUIs) with the raw power and flexibility of command-line interfaces (CLIs)? In this episode, Darin and Viktor discuss a blog post by Ian Miell titled In Praise of Low Tech DevEx. In Praise of Low Tech DevEx https://blog.container-solutions.com/in-praise-of-low-tech-devex YouTube channel: https://youtube.com/devopsparadox Review the podcast on Apple Podcasts: https://www.devopsparadox.com/review-podcast/ Slack: https://www.devopsparadox.com/slack/ Connect with us at: https://www.devopsparadox.com/contact/
In life, we spend our time sticking labels on people the way we loved sticking little stickers on things in kindergarten when we were small. So-and-so is shy, so-and-so is messy, so-and-so is funny, and so on. But the most widespread label, and no doubt the hardest one to carry, is that of touchiness. It gives the impression that nothing can ever be said to us, and in return, we no longer dare to speak up against the things that bother us. But what, in fact, is a touchy person? In this episode, journalist Eloïse Renou tackles the thorny question of her own touchiness, drawing on the testimony of Hélina, who spirals at the slightest remark. Neuropsychologist Harmony Duclos defines touchiness as a "social emotion" rather than a character trait. Together, they discuss self-confidence, oversized egos, the six basic emotions, and the tendency to blame others. They ask how to stop suffering from this label and why we ourselves so easily tend to pin it on the people close to us. To go further: the master's thesis "La démarche scientifique pour restaurer l'estime de soi : une expérimentation adaptée en CLIS" by Viviane François, on the CNRS DUMAS portal; the article "Whatever people say I am, that's what I am: Social labeling as a social marketing tool" by Gert Cornelissen, published in 2007 in the International Journal of Research in Marketing; the article "La théorie de l'étiquetage modifiée, ou l'« analyse stigmatique » revisitée" by Lionel Lacaze, published in 2008 in the Nouvelle revue de psychosociologie; the article "Quelques disqualifications. Le sentiment ou ressenti d'incompétence" by Héloïse de Visscher, published in 2013 in the Cahiers internationaux de psychologie sociale. Eloïse Renou recorded and wrote this episode. Sound design is by Renaud Wattine. The theme music was created by Clémence Reliat, from an excerpt of En Sommeil by Jaune. Lena Coutrot is the producer of Émotions. Follow Louie Media on Instagram, Facebook, Twitter. If you too would like to tell us your story, write to us by filling out this form. And if you would like to support Louie, don't hesitate to subscribe to the Club. Hosted by Acast. Visit acast.com/privacy for more information.
Glauber Costa is the founder of Turso - a fully managed SQLite database platform. Glauber shares how to make great CLIs, the story of Turso's pivot, their pricing, and the importance of moving fast. Links: Turso - https://turso.tech/ Glauber's Twitter - https://twitter.com/glcst This episode is sponsored by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.
This Friday we're doing a special crossover event in SF with SemiAnalysis (previous guest!), and we will do a live podcast on site. RSVP here. Also join us on June 25-27 for the biggest AI Engineer conference of the year!

Replicate is one of the most popular AI inference providers, reporting over 2 million users as of their $40m Series B with a16z. But how did they get there?

The Definitive Replicate Story (warts and all)

Their overnight success took 5 years of building, and it all started with arXiv Vanity, which was a 2017 vacation project that scrapes arXiv PDFs and re-renders them into semantic web pages that reflow nicely with better typography and whitespace. From there, Ben and Andreas' idea was to build tools to make ML research more robust and reproducible by making it easy to share code artefacts alongside papers. They had previously created Fig, which made it easy to spin up dev environments; it was eventually acquired by Docker and turned into `docker-compose`, the industry standard way to define services from containerized applications.

2019: Cog

The first iteration of Replicate was a Fig-equivalent for ML workloads which they called Cog; it made it easy for researchers to package all their work and share it with peers for review and reproducibility. But they found that researchers were terrible users: they'd do all this work for a paper, publish it, and then never return to it again. “We talked to a bunch of researchers and they really wanted that.... But how the hell is this a business, you know, like how are we even going to make any money out of this? …So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. Do you want like a deployment platform for deploying models? Do you want a central place for versioning models? We were trying to think of lots of different products we could sell that were related to this thing…So we then got halfway through our YC batch. We hadn't built a product. We had no users. We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day.”

The team graduated YCombinator with no customers, no product and nothing to demo - which was fine because demo day got canceled as the YC W'20 class graduated right into the pandemic. The team spent the next year exploring and building Covid tools.

2021: CLIP + GAN = PixRay

By 2021, OpenAI released CLIP. Overnight dozens of Discord servers got spun up to hack on CLIP + GANs. Unlike academic researchers, this community was constantly releasing new checkpoints and builds of models. PixRay was one of the first models being built on Replicate, and it quickly started taking over the community.
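For readers who haven't used Replicate, the hosted-inference product this story builds toward boils down to one call against a published model. A minimal, hedged sketch using Replicate's Python client; the model identifier and version hash below are illustrative placeholders, and a REPLICATE_API_TOKEN environment variable is assumed.

```python
# Minimal sketch of calling a hosted model on Replicate.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN set in the environment;
# the model slug and version hash are placeholders, not a specific recommendation.
import replicate

output = replicate.run(
    "stability-ai/stable-diffusion:some-version-hash",   # placeholder model identifier
    input={"prompt": "an astronaut riding a horse, watercolor"},
)
print(output)  # typically a URL (or list of URLs) pointing at the generated output
```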
Chris Dixon has a famous 2010 post titled “The next big thing will start out looking like a toy”; image generation would have definitely felt like a toy in 2021, but it gave Replicate its initial boost.

2022: Stable Diffusion

In August 2022 Stable Diffusion came out, and all the work they had been doing to build this infrastructure for CLIP / GANs models became the best way for people to share their Stable Diffusion fine-tunes:

And like the first week we saw people making animation models out of it. We saw people make game texture models that use circular convolutions to make repeatable textures. We saw a few weeks later, people were fine tuning it so you could put your face in these models and all of these other ways. […] So tons of product builders wanted to build stuff with it. And we were just sitting in there in the middle, as the interface layer between all these people who wanted to build, and all these machine learning experts who were building cool models. And that's really where it took off. Incredible supply, incredible demand, and we were just in the middle.

(Stable Diffusion also spawned Latent Space as a newsletter)

The landing page paved the cowpath for the intense interest in diffusion model APIs.

2023: Llama & other multimodal LLMs

By 2023, Replicate's growing visibility in the Stable Diffusion indie hacker community came from top AI hackers like Pieter Levels and Danny Postma, each making millions off their AI apps:

Meta then released LLaMA 1 and 2 (our coverage of it), greatly pushing forward the SOTA open source model landscape. Demand for text LLMs and other modalities rose, and Replicate broadened its focus accordingly, culminating in an $18m Series A and $40m Series B from a16z (at a $350m valuation).

Building standards for the AI world

Now that the industry is evolving from toys to enterprise use cases, all these companies are working to set standards for their own space. We cover this at ~45 mins in the podcast. Some examples:

* LangChain has been trying to establish “chain” as the standard mental model when putting multiple prompts and models together, and the “LangChain Expression Language” to go with it. (Our episode with Harrison)
* LlamaHub for packaging RAG utilities. (Our episode with Jerry)
* Ollama's Modelfile to define runtimes for different model architectures. These are usually targeted at local inference.
* Cog (by Replicate) to create environments to which you can easily attach CUDA devices and make it easy to spin up inference on remote servers.
* GGUF as the file type for ggml-based executors.

None of them have really broken out yet, but this is going to become a fiercer competition as the market matures.
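To make the Cog bullet above concrete: a Cog model pairs a `cog.yaml` (Python version, GPU flag, pip packages) with a predictor class. The sketch below follows Cog's documented `BasePredictor` pattern, but treat it as a hedged example; the model-loading and inference lines are placeholders rather than a real model.

```python
# predict.py -- minimal Cog predictor sketch (placeholder model; see Cog's docs for the authoritative API)
from cog import BasePredictor, Input, Path

class Predictor(BasePredictor):
    def setup(self):
        # Load weights once when the container starts, not on every request.
        self.model = None  # placeholder for e.g. torch.load("weights.pth")

    def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
        # Run inference and return a file; the platform serves it back to the caller.
        out = Path("/tmp/output.txt")
        out.write_text(f"echo: {prompt}")  # stand-in for real model output
        return out
```

From there, commands along the lines of `cog predict -i prompt="hello"` and `cog push` run the model locally in its container and publish it so it can be called through a hosted API like the one sketched earlier (check Cog's docs for the exact invocations).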
Full Video PodcastAs a reminder, all Latent Space pods now come in full video on our YouTube, with bonus content that we cut for time!Show Notes* Ben Firshman* Replicate* Free $10 credit for Latent Space readers* Andreas Jansson (Ben's co-founder)* Charlie Holtz (Replicate's Hacker in Residence)* Fig (now Docker Compose)* Command Line Interface Guidelines (clig)* Apple Human Interface Guidelines* arXiv Vanity* Open Interpreter* PixRay* SF Compute* Big Sleep by Advadnoun* VQGAN-CLIP by Rivers Have WingsTimestamps* [00:00:00] Introductions* [00:01:17] Low latency is all you need* [00:04:08] Evolution of CLIs* [00:05:59] How building ArxivVanity led to Replicate* [00:11:37] Making ML research replicable with containers* [00:17:22] Doing YC in 2020 and pivoting to tools for COVID* [00:20:22] Launching the first version of Replicate* [00:25:51] Embracing the generative image community* [00:28:04] Getting reverse engineered into an API product* [00:31:25] Growing to 2 million users* [00:34:29] Indie vs Enterprise customers* [00:37:09] How Unsplash uses Replicate* [00:38:29] Learnings from Docker that went into Cog* [00:45:25] Creating AI standards* [00:50:05] Replicate's compute availability* [00:53:55] Fixing GPU waste* [01:00:39] What's open source AI?* [01:04:46] Building for AI engineers* [01:06:41] Hiring at ReplicateThis summary covers the full range of topics discussed throughout the episode, providing a comprehensive overview of the content and insights shared.TranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:14]: Hey, and today we have Ben Firshman in the studio. Welcome Ben.Ben [00:00:18]: Hey, good to be here.Swyx [00:00:19]: Ben, you're a co-founder and CEO of Replicate. Before that, you were most notably founder of Fig, which became Docker Compose. You also did a couple of other things before that, but that's what a lot of people know you for. What should people know about you that, you know, outside of your, your sort of LinkedIn profile?Ben [00:00:35]: Yeah. Good question. I think I'm a builder and tinkerer, like in a very broad sense. And I love using my hands to make things. So like I work on, you know, things may be a bit closer to tech, like electronics. I also like build things out of wood and I like fix cars and I fix my bike and build bicycles and all this kind of stuff. And there's so much, I think I've learned from transferable skills, from just like working in the real world to building things, building things in software. And you know, it's so much about being a builder, both in real life and, and in software that crosses over.Swyx [00:01:11]: Is there a real world analogy that you use often when you're thinking about like a code architecture or problem?Ben [00:01:17]: I like to build software tools as if they were something real. So I wrote this thing called the command line interface guidelines, which was a bit like sort of the Mac human interface guidelines, but for command line interfaces, I did it with the guy I created Docker Compose with and a few other people. And I think something in there, I think I described that your command line interface should feel like a big iron machine where you pull a lever and it goes clunk and like things should respond within like 50 milliseconds as if it was like a real life thing. 
And like another analogy here is like in the real life, you know, when you press a button on an electronic device and it's like a soft switch and you press it and nothing happens and there's no physical feedback of anything happening, then like half a second later, something happens. Like that's how a lot of software feels, but instead like software should feel more like something that's real where you touch, you pull a physical lever and the physical lever moves, you know, and I've taken that lesson of kind of human interface to, to software a ton. You know, it's all about kind of low latency of feeling, things feeling really solid and robust, both the command lines and, and user interfaces as well.Swyx [00:02:22]: And how did you operationalize that for Fig or Docker?Ben [00:02:27]: A lot of it's just low latency. Actually, we didn't do it very well for Fig in the first place. We used Python, which was a big mistake where Python's really hard to get booting up fast because you have to load up the whole Python runtime before it can run anything. Okay. Go is much better at this where like Go just instantly starts.Swyx [00:02:45]: You have to be under 500 milliseconds to start up?Ben [00:02:48]: Yeah, effectively. I mean, I mean, you know, perception of human things being immediate is, you know, something like a hundred milliseconds. So anything like that is, is yeah, good enough.Swyx [00:02:57]: Yeah. Also, I should mention, since we're talking about your side projects, well, one thing is I am maybe one of a few fellow people who have actually written something about CLI design principles because I was in charge of the Netlify CLI back in the day and had many thoughts. One of my fun thoughts, I'll just share it in case you have thoughts, is I think CLIs are effectively starting points for scripts that are then run. And the moment one of the script's preconditions are not fulfilled, typically they end. So the CLI developer will just exit the program. And the way that I designed, I really wanted to create the Netlify dev workflow was for it to be kind of a state machine that would resolve itself. If it detected a precondition wasn't fulfilled, it would actually delegate to a subprogram that would then fulfill that precondition, asking for more info or waiting until a condition is fulfilled. Then it would go back to the original flow and continue that. I don't know if that was ever tried or is there a more formal definition of it? Because I just came up with it randomly. But it felt like the beginnings of AI in the sense that when you run a CLI command, you have an intent to do something and you may not have given the CLI all the things that it needs to do, to execute that intent. So that was my two cents.Ben [00:04:08]: Yeah, that reminds me of a thing we sort of thought about when writing the CLI guidelines, where CLIs were designed in a world where the CLI was really a programming environment and it's primarily designed for machines to use all of these commands and scripts. Whereas over time, the CLI has evolved to humans. It was back in a world where the primary way of using computers was writing shell scripts effectively. We've transitioned to a world where actually humans are using CLI programs much more than they used to. And the current sort of best practices about how Unix was designed, there's lots of design documents about Unix from the 70s and 80s, where they say things like, command line commands should not output anything on success. 
It should be completely silent, which makes sense if you're using it in a shell script. But if a user is using that, it just looks like it's broken. If you type copy and it just doesn't say anything, you assume that it didn't work as a new user. I think what's really interesting about the CLI is that it's actually a really good, to your point, it's a really good user interface where it can be like a conversation, where it feels like you're, instead of just like you telling the computer to do this thing and either silently succeeding or saying, no, you did, failed, it can guide you in the right direction and tell you what your intent might be, and that kind of thing in a way that's actually, it's almost more natural to a CLI than it is in a graphical user interface because it feels like this back and forth with the computer, almost funnily like a language model. So I think there's some interesting intersection of CLIs and language models actually being very sort of closely related and a good fit for each other.Swyx [00:05:59]: Yeah, I'll say one of the surprises from last year, I worked on a coding agent, but I think the most successful coding agent of my cohort was Open Interpreter, which was a CLI implementation. And I have chronically, even as a CLI person, I have chronically underestimated the CLI as a useful interface. You also developed ArchiveVanity, which you recently retired after a glorious seven years.Ben [00:06:22]: Something like that.Swyx [00:06:23]: Which is nice, I guess, HTML PDFs.Ben [00:06:27]: Yeah, that was actually the start of where Replicate came from. Okay, we can tell that story. So when I quit Docker, I got really interested in science infrastructure, just as like a problem area, because it is like science has created so much progress in the world. The fact that we're, you know, can talk to each other on a podcast and we use computers and the fact that we're alive is probably thanks to medical research, you know. But science is just like completely archaic and broken and it's like 19th century processes that just happen to be copied to the internet rather than take into account that, you know, we can transfer information at the speed of light now. And the whole way science is funded and all this kind of thing is all kind of very broken. And there's just so much potential for making science work better. And I realized that I wasn't a scientist and I didn't really have any time to go and get a PhD and become a researcher, but I'm a tool builder and I could make existing scientists better at their job. And if I could make like a bunch of scientists a little bit better at their job, maybe that's the kind of equivalent of being a researcher. So one particular thing I dialed in on is just how science is disseminated in that all of these PDFs, quite often behind paywalls, you know, on the internet.Swyx [00:07:34]: And that's a whole thing because it's funded by national grants, government grants, then they're put behind paywalls. Yeah, exactly.Ben [00:07:40]: That's like a whole, yeah, I could talk for hours about that. But the particular thing we got dialed in on was, interestingly, these PDFs are also, there's a bunch of open science that happens as well. So math, physics, computer science, machine learning, notably, is all published on the archive, which is actually a surprisingly old institution.Swyx [00:08:00]: Some random Cornell.Ben [00:08:01]: Yeah, it was just like somebody in Cornell who started a mailing list in the 80s. 
And then when the web was invented, they built a web interface around it. Like it's super old.Swyx [00:08:11]: And it's like kind of like a user group thing, right? That's why they're all these like numbers and stuff.Ben [00:08:15]: Yeah, exactly. Like it's a bit like something, yeah. That's where all basically all of math, physics and computer science happens. But it's still PDFs published to this thing. Yeah, which is just so infuriating. The web was invented at CERN, a physics institution, to share academic writing. Like there are figure tags, there are like author tags, there are heading tags, there are site tags. You know, hyperlinks are effectively citations because you want to link to another academic paper. But instead, you have to like copy and paste these things and try and get around paywalls. Like it's absurd, you know. And now we have like social media and things, but still like academic papers as PDFs, you know. This is not what the web was for. So anyway, I got really frustrated with that. And I went on vacation with my old friend Andreas. So we were, we used to work together in London on a startup, at somebody else's startup. And we were just on vacation in Greece for fun. And he was like trying to read a machine learning paper on his phone, you know, like we had to like zoom in and like scroll line by line on the PDF. And he was like, this is f*****g stupid. So I was like, I know, like this is something we discovered our mutual hatred for this, you know. And we spent our vacation sitting by the pool, like making latex to HTML, like converters, making the first version of Archive Vanity. Anyway, that was up then a whole thing. And the story, we shut it down recently because they caught the eye of Archive. They were like, oh, this is great. We just haven't had the time to work on this. And what's tragic about the Archive, it's like this project of Cornell that's like, they can barely scrounge together enough money to survive. I think it might be better funded now than it was when we were, we were collaborating with them. And compared to these like scientific journals, it's just that this is actually where the work happens. But they just have a fraction of the money that like these big scientific journals have, which is just so tragic. But anyway, they were like, yeah, this is great. We can't afford to like do it, but do you want to like as a volunteer integrate arXiv Vanity into arXiv?Swyx [00:10:05]: Oh, you did the work.Ben [00:10:06]: We didn't do the work. We started doing the work. We did some. I think we worked on this for like a few months to actually get it integrated into arXiv. And then we got like distracted by Replicate. So a guy called Dan picked up the work and made it happen. Like somebody who works on one of the, the piece of the libraries that powers arXiv Vanity. Okay.Swyx [00:10:26]: And the relationship with arXiv Sanity?Ben [00:10:28]: None.Swyx [00:10:30]: Did you predate them? I actually don't know the lineage.Ben [00:10:32]: We were after, we both were both users of arXiv Sanity, which is like a sort of arXiv...Ben [00:10:37]: Which is Andre's RecSys on top of arXiv.Ben [00:10:40]: Yeah. Yeah. And we were both users of that. And I think we were trying to come up with a working name for arXiv and Andreas just like cracked a joke of like, oh, let's call it arXiv Vanity. Let's make the papers look nice. Yeah. Yeah. And that was the working name and it just stuck.Swyx [00:10:52]: Got it.Ben [00:10:53]: Got it.Alessio [00:10:54]: Yeah. 
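Circling back to the precondition-resolving CLI idea swyx floated a few exchanges earlier: instead of exiting when a precondition isn't met, the command delegates to a sub-step that fixes it (often by asking the user), then resumes the original intent. A hedged sketch of that pattern, not the Netlify CLI's actual implementation; the file paths and command name are made up.

```python
# Each precondition can check itself and resolve itself, so the command self-heals
# instead of bailing out. Paths and file names here are hypothetical.
import os

TOKEN_PATH = os.path.expanduser("~/.mycli/token")   # hypothetical credentials location
PROJECT_FILE = ".mycli.toml"                        # hypothetical per-directory config

def ensure_logged_in():
    if not os.path.exists(TOKEN_PATH):
        token = input("Paste your API token: ")
        os.makedirs(os.path.dirname(TOKEN_PATH), exist_ok=True)
        with open(TOKEN_PATH, "w") as f:
            f.write(token)

def ensure_project_linked():
    if not os.path.exists(PROJECT_FILE):
        name = input("Link this directory to a project (name): ")
        with open(PROJECT_FILE, "w") as f:
            f.write(f'project = "{name}"\n')

def deploy():
    # Resolve every precondition, then carry out the original intent.
    for precondition in (ensure_logged_in, ensure_project_linked):
        precondition()
    print("Deploying...")  # the real work would go here

if __name__ == "__main__":
    deploy()
```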
And then from there, tell us more about why you got distracted, right? So Replicate, maybe it feels like an overnight success to a lot of people, but you've been building this since 2019. Yeah.Ben [00:11:04]: So what prompted the start?Alessio [00:11:05]: And we've been collaborating for even longer.Ben [00:11:07]: So we created arXiv Vanity in 2017. So in some sense, we've been doing this almost like six, seven years now, a classic seven year.Swyx [00:11:16]: Overnight success.Ben [00:11:17]: Yeah. Yes. We did arXiv Vanity and then worked on a bunch of like surrounding projects. I was still like really interested in science publishing at that point. And I'm trying to remember, because I tell a lot of like the condensed story to people because I can't really tell like a seven year history. So I'm trying to figure out like the right. Oh, we got room. The right length.Swyx [00:11:35]: We want to nail the definitive Replicate story here.Ben [00:11:37]: One thing that's really interesting about these machine learning papers is that these machine learning papers are published on arXiv and a lot of them are actual fundamental research. So like should be like prose describing a theory. But a lot of them are just running pieces of software that like a machine learning researcher made that did something, you know, it was like an image classification model or something. And they managed to make an image classification model that was better than the existing state of the art. And they've made an actual running piece of software that does image segmentation. And then what they had to do is they then had to take that piece of software and write it up as prose and math in a PDF. And what's frustrating about that is, like, if you want to use it... So this was like Andreas's situation. Andreas was a machine learning engineer at Spotify. And some of his job was like he did pure research as well. Like he did a PhD and he was doing a lot of stuff internally. But part of his job was also being an engineer and taking some of these existing things that people have made and published and trying to apply them to actual problems at Spotify. And he was like, you know, you get given a paper which like describes roughly how the model works. It's probably missing lots of crucial information. There's sometimes code on GitHub. More and more there's code on GitHub. But back then it was kind of relatively rare. But it's quite often just like scrappy research code and didn't actually run. And, you know, there was maybe the weights that were on Google Drive, but they accidentally deleted the weights off Google Drive, you know, and it was like really hard to like take this stuff and actually use it for real things. We just started talking together about like his problems at Spotify and I connected this back to my work at Docker as well. I was like, oh, this is what we created containers for. You know, we solved this problem for normal software by putting the thing inside a container so you could ship it around and it kept on running. So we were sort of hypothesizing about like, hmm, what if we put machine learning models inside containers so they could actually be shipped around and they could be defined in like some production ready formats and other researchers could run them to generate baselines, and people who wanted to actually apply them to real problems in the world could just pick up the container and run it, you know.
And we then thought this is quite whether it gets normally in this part of the story I skip forward to be like and then we created cog this container stuff for machine learning models and we created Replicate, the place for people to publish these machine learning models. But there's actually like two or three years between that. The thing we then got dialed into was Andreas was like, what if there was a CI system for machine learning? It's like one of the things he really struggled with as a researcher is generating baselines. So when like he's writing a paper, he needs to like get like five other models that are existing work and get them running.Swyx [00:14:21]: On the same evals.Ben [00:14:22]: Exactly, on the same evals so you can compare apples to apples because you can't trust the numbers in the paper.Swyx [00:14:26]: So you can be Google and just publish them anyway.Ben [00:14:31]: So I think this was coming from the thinking of like there should be containers for machine learning, but why are people going to use that? Okay, maybe we can create a supply of containers by like creating this useful tool for researchers. And the useful tool was like, let's get researchers to package up their models and push them to the central place where we run a standard set of benchmarks across the models so that you can trust those results and you can compare these models apples to apples and for like a researcher for Andreas, like doing a new piece of research, he could trust those numbers and he could like pull down those models, confirm it on his machine, use the standard benchmark to then measure his model and you know, all this kind of stuff. And so we started building that. That's what we applied to YC with, got into YC and we started sort of building a prototype of this. And then this is like where it all starts to fall apart. We were like, okay, that sounds great. And we talked to a bunch of researchers and they really wanted that and that sounds brilliant. That's a great way to create a supply of like models on this research platform. But how the hell is this a business, you know, like how are we even going to make any money out of this? And we're like, oh s**t, that's like the, that's the real unknown here of like what the business is. So we thought it would be a really good idea to like, okay, before we get too deep into this, let's try and like reduce the risk of this turning into a business. So let's try and like research what the business could be for this research tool effectively. So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. And we were like, do you want like a deployment platform for deploying models? Like, do you want like a central place for versioning models? Like we're trying to think of like lots of different like products we could sell that were like related to this thing. And terrible idea. Like we're not sales people and like people don't want to buy something that doesn't exist. I think some people can pull this off, but we were just like, you know, a bunch of product people, products and engineer people, and we just like couldn't pull this off. So we then got halfway through our YC batch. We hadn't built a product. We had no users. 
We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day. And then we then like tried to figure out, okay, what can we build in like two weeks that'll be something. So we like desperately tried to, I can't remember what we've tried to build at that point. And then two weeks before demo day, I just remember it was all, we were going down to Mountain View every week for dinners and we got called on to like an all hands Zoom call, which was super weird. We're like, what's going on? And they were like, don't come to dinner tomorrow. And we realized, we kind of looked at the news and we were like, oh, there's a pandemic going on. We were like so deep in our startup. We were just like completely oblivious to what was going on around us.Swyx [00:17:20]: Was this Jan or Feb 2020?Ben [00:17:22]: This was March 2020. March 2020. 2020.Swyx [00:17:25]: Yeah. Because I remember Silicon Valley at the time was early to COVID. Like they started locking down a lot faster than the rest of the US.Ben [00:17:32]: Yeah, exactly. And I remember, yeah, soon after that, like there was the San Francisco lockdowns and then like the YC batch just like stopped. There wasn't demo day and it was in a sense a blessing for us because we just kind ofSwyx [00:17:43]: In the normal course of events, you're actually allowed to defer to a future demo day. Yeah.Ben [00:17:51]: So we didn't even take any defer because it just kind of didn't happen.Swyx [00:17:55]: So was YC helpful?Ben [00:17:57]: Yes. We completely screwed up the batch and that was our fault. I think the thing that YC has become incredibly valuable for us has been after YC. I think there was a reason why we couldn't, didn't need to do YC to start with because we were quite experienced. We had done some startups before. We were kind of well connected with VCs, you know, it was relatively easy to raise money because we were like a known quantity. You know, if you go to a VC and be like, Hey, I made this piece of-Swyx [00:18:24]: It's Docker Compose for AI.Ben [00:18:26]: Exactly. Yeah. And like, you know, people can pattern match like that and they can have some trust, you know what you're doing. Whereas it's much harder for people straight out of college and that's where like YC sweet spot is like helping people straight out of college who are super promising, like figure out how to do that.Swyx [00:18:40]: No credentials.Ben [00:18:41]: Yeah, exactly. We don't need that. But the thing that's been incredibly useful for us since YC has been, this was actually, I think, so Docker was a YC company and Solomon, the founder of Docker, I think told me this. He was like, a lot of people underestimate the value of YC after you finish the batch. And his biggest regret was like not staying in touch with YC. I might be misattributing this, but I think it was him. And so we made a point of that. And we just stayed in touch with our batch partner, who Jared at YC has been fantastic.Ben [00:19:10]: Jared Friedman. All of like the team at YC, there was the growth team at YC when they were still there and they've been super helpful. And two things have been super helpful about that is like raising money, like they just know exactly how to raise money. 
And they've been super helpful during that process in all of our rounds, like we've done three rounds since we did YC and they've been super helpful during the whole process. And also just like reaching a ton of customers. So like the magic of YC is that you have all of, like there's thousands of YC companies, I think, on the order of thousands, I think. And they're all of your first customers. And they're like super helpful, super receptive, really want to like try out new things. You have like a warm intro to every one of them basically. And there's this mailing list where you can post about updates to your products, which is like really receptive. And that's just been fantastic for us. Like we've just like got so many of our users and customers through YC. Yeah.Swyx [00:20:00]: Well, so the classic criticism or the sort of, you know, pushback is people don't buy you because you are both from YC. But at least they'll open the email. Right. Like that's the... Okay.Ben [00:20:13]: Yeah. Yeah. Yeah.Swyx [00:20:16]: So that's been a really, really positive experience for us. And sorry, I interrupted with the YC question. Like you were, you make it, you just made it out of the YC, survived the pandemic.Ben [00:20:22]: I'll try and condense this a little bit. Then we started building tools for COVID weirdly. We were like, okay, we don't have a startup. We haven't figured out anything. What's the most useful thing we could be doing right now?Swyx [00:20:32]: Save lives.Ben [00:20:33]: So yeah. Let's try and save lives. I think we failed at that as well. We had a bunch of products that didn't really go anywhere. We kind of worked on, yeah, a bunch of stuff like contact tracing, which turned out didn't really be a useful thing. Sort of Andreas worked on like a door dash for like people delivering food to people who are vulnerable. What else did we do? The meta problem of like helping people direct their efforts to what was most useful and a few other things like that. It didn't really go anywhere. So we're like, okay, this is not really working either. We were considering actually just like doing like work for COVID. We have this decision document early on in our company, which is like, should we become a like government app contracting shop? We decided no.Swyx [00:21:11]: Because you also did work for the gov.uk. Yeah, exactly.Ben [00:21:14]: We had experience like doing some like-Swyx [00:21:17]: And the Guardian and all that.Ben [00:21:18]: Yeah. For like government stuff. And we were just like really good at building stuff. Like we were just like product people. Like I was like the front end product side and Andreas was the back end side. So we were just like a product. And we were working with a designer at the time, a guy called Mark, who did our early designs for Replicate. And we were like, hey, what if we just team up and like become and build stuff? And yeah, we gave up on that in the end for, I can't remember the details. So we went back to machine learning. And then we were like, well, we're not really sure if this is going to work. And one of my most painful experiences from previous startups is shutting them down. Like when you realize it's not really working and having to shut it down, it's like a ton of work and it's people hate you and it's just sort of, you know. So we were like, how can we make something we don't have to shut down? And even better, how can we make something that won't page us in the middle of the night? So we made an open source project. 
We made a thing which was an open source Weights and Biases, because we had this theory that like people want open source tools. There should be like an open source, like version control, experiment tracking like thing. And it was intuitive to us and we're like, oh, we're software developers and we like command line tools. Like everyone loves command line tools and open source stuff, but machine learning researchers just really didn't care. Like they just wanted to click on buttons. They didn't mind that it was a cloud service. It was all very visual as well, that you need lots of graphs and charts and stuff like this. So it wasn't right. Like it was right. We actually were building something that Andreas made at Spotify for just like saving experiments to cloud storage automatically, but other people didn't really want this. So we kind of gave up on that. And then that was actually originally called Replicate and we renamed that out of the way. So it's now called Keepsake and I think some people still use it. Then we sort of came back, we looped back to our original idea. So we were like, oh, maybe there was a thing in that thing we were originally sort of thinking about of like researchers sharing their work and containers for machine learning models. So we just built that. And at that point we were kind of running out of the YC money. So we were like, okay, this like feels good though. Let's like give this a shot. So that was the point we raised a seed round. We raised seed round. Pre-launch. We raised pre-launch and pre-team. It was an idea basically. We had a little prototype. It was just an idea and a team. But we were like, okay, like, you know, bootstrapping this thing is getting hard. So let's actually raise some money. Then we made Cog and Replicate. It initially didn't have APIs, interestingly. It was just the bit that I was talking about before of helping researchers share their work. So it was a way for researchers to put their work on a webpage such that other people could try it out and so that you could download the Docker container. We cut the benchmarks thing of it because we thought that was just like too complicated. But it had a Docker container that like, you know, Andreas in a past life could download and run with his benchmark and you could compare all these models apples to apples. So that was like the theory behind it. That kind of started to work. It was like still when like, you know, it was long time pre-AI hype and there was lots of interesting stuff going on, but it was very much in like the classic deep learning era. So sort of image segmentation models and sentiment analysis and all these kinds of things, you know, that people were using, that we're using deep learning models for. And we were very much building for research because all of this stuff was happening in research institutions, you know, the sort of people who'd be publishing to archive. So we were creating an accompanying material for their models, basically, you know, they wanted a demo for their models and we were creating a company material for it. What was funny about that is they were like not very good users. Like they were, they were doing great work obviously, but, but the way that research worked is that they, they just made like one thing every six months and they just fired and forget it, forgot it. Like they, they published this piece of paper and like, done, I've, I've published it. So they like output it to Replicate and then they just stopped using Replicate. 
You know, they were like once-every-six-months users and that wasn't great for us, but we stumbled across this early community. This was early 2021 when OpenAI created this, created CLIP and people started smushing CLIP and GANs together to produce image generation models. And this started with, you know, it was just a bunch of like tinkerers on Discord, basically. There was an early model called Big Sleep by Advadnoun. And then there was VQGAN-CLIP, which was like a bit more popular, by Rivers Have Wings. And it was all just people like tinkering on stuff in Colabs and it was very dynamic and it was people just making copies of Colabs and playing around with things and forking them. And to me this, I saw this and I was like, oh, this feels like open source software, like so much more than the research world where like people are publishing these papers.Swyx [00:25:48]: You don't know their real names and it's just like a Discord.Ben [00:25:51]: Yeah, exactly. But crucially, it was like people were tinkering and forking and things were moving really fast and it just felt like this creative, dynamic, collaborative community in a way that research wasn't really, like it was still stuck in this kind of six month publication cycle. So we just kind of latched onto that and started building for this community. And you know, a lot of those early models were published on Replicate. I think the first one that was really primarily on Replicate was one called Pixray, which was sort of mid 2021 and it had a really cool like pixel art output, but it also just like produced general, you know, the sort of, they weren't like crisp images, but they were quite aesthetically pleasing, like some of these early image generation models. And you know, that was like published primarily on Replicate and then a few other models around that were like published on Replicate. And that's where we really started to find our early community and like where we really found like, oh, we've actually built a thing that people want and they were great users as well. And people really want to try out these models. Lots of people were like running the models on Replicate. We still didn't have APIs though, interestingly, and this is like another like really complicated part of the story. We had no idea what a business model was still at this point. I don't think people could even pay for it. You know, it was just like these web forms where people could run the model.Swyx [00:27:06]: Just for historical interest, which discords were they and how did you find them? Was this the LAION Discord? Yeah, LAION. This is Eleuther.Ben [00:27:12]: Eleuther, yeah. It was the Eleuther one. These two, right? There was a channel where VQGAN-CLIP, this was early 2021, where VQGAN-CLIP was set up as a Discord bot. I just remember being completely just like captivated by this thing. I was just like playing around with it all afternoon and like the sort of thing. In Discord. Oh s**t, it's 2am. You know, yeah.Swyx [00:27:33]: This is the beginnings of Midjourney.Ben [00:27:34]: Yeah, exactly. And Stability. It was the start of Midjourney. And you know, it's where that kind of user interface came from. Like what's beautiful about the user interface is like you could see what other people are doing. And you could riff off other people's ideas. And it was just so much fun to just like play around with this in like a channel full of a hundred people. And yeah, that just like completely captivated me and I'm like, okay, this is something, you know.
So like we should get these things on Replicate. Yeah, that's where that all came from.Swyx [00:28:00]: And then you moved on to, so was it APIs next or was it Stable Diffusion next?Ben [00:28:04]: It was APIs next. And the APIs happened because one of our users, our web form had like an internal API for making the web form work, like with an API that was called from JavaScript. And somebody like reverse engineered that to start generating images with a script. You know, they did like, you know, Web Inspector, copy as cURL, like figured out what the API request was. And it wasn't secured or anything.Swyx [00:28:28]: Of course not.Ben [00:28:29]: They started generating a bunch of images and like we got tons of traffic and like what's going on? And I think like a sort of usual reaction to that would be like, hey, you're abusing our API and to shut them down. And instead we're like, oh, this is interesting. Like people want to run these models. So we documented the API in a Notion document, like our internal API in a Notion document and like message this person being like, hey, you seem to have found our API. Here's the documentation. That'll be like a thousand bucks a month, please, with a Stripe form, like we just clicked some buttons to make. And they were like, sure, that sounds great. So that was our first customer.Swyx [00:29:05]: A thousand bucks a month.Ben [00:29:07]: It was a surprising amount of money. That's not casual. It was on the order of a thousand bucks a month.Swyx [00:29:11]: So was it a business?Ben [00:29:13]: It was the creator of PixRay. Like it was, he generated NFT art. And so he like made a bunch of art with these models and was, you know, selling these NFTs effectively. And I think lots of people in his community were doing similar things. And like he then referred us to other people who were also generating NFTs and he joined us with models. We started our API business. Yeah. Then we like made an official API and actually like added some billing to it. So it wasn't just like a fixed fee.Swyx [00:29:40]: And now people think of you as the hosted models API business. Yeah, exactly.Ben [00:29:44]: But that just turned out to be our business, you know, but what ended up being beautiful about this is it was really fulfilling. Like the original goal of what we wanted to do is that we wanted to make this research that people were making accessible to like other people and for it to be used in the real world. And this was just like ultimately the right way to do it because all of these people making these generative models could publish them to Replicate and they wanted a place to publish it. And software engineers, you know, like myself, like I'm not a machine learning expert, but I want to use this stuff, could just run these models with a single line of code. And we thought, oh, maybe the Docker image is enough, but it's actually super hard to get the Docker image running on a GPU and stuff. So it really needed to be the hosted API for this to work and to make it accessible to software engineers. And we just like wound our way to this. Yeah.Swyx [00:30:30]: Two years to the first paying customer. Yeah, exactly.Alessio [00:30:33]: Did you ever think about becoming Midjourney during that time? You have like so much interest in image generation.Swyx [00:30:38]: I mean, you're doing fine for the record, but, you know, it was right there, you were playing with it.Ben [00:30:46]: I don't think it was our expertise.
Like I think our expertise was DevTools, whereas Midjourney is almost like a consumer product, you know? Yeah. So I don't think it was our expertise. It certainly occurred to us. I think at the time we were thinking about like, oh, maybe we could hire some of these people in this community and make great models and stuff like this. But we ended up more being at the tooling. Like I think like before I was saying, like I'm not really a researcher, but I'm more like the tool builder, the behind the scenes. And I think both me and Andreas are like that.Swyx [00:31:09]: I think this is an illustration of the tool builder philosophy. Something you latch on to in DevTools, which is: when you see people behaving weird, it's not their fault, it's yours. And you want to pave the cow paths is what they say, right? Like the unofficial paths that people are making, like make it official and make it easy for them and then maybe charge a bit of money.Alessio [00:31:25]: And now fast forward a couple of years, you have 2 million developers using Replicate. Maybe more. That was the last public number that I found.Ben [00:31:33]: It's 2 million users. Not all those people are developers, but a lot of them are developers, yeah.Alessio [00:31:38]: And then 30,000 paying customers was the number. Latent Space runs on Replicate. So we're a small podcast and we host a Whisper diarization model on Replicate. And we're paying. So Latent Space is in the 30,000. You raised a $40 million Series B. I would say that maybe the Stable Diffusion moment, August '22, was like really when the company started to break out. Tell us a bit about that and the community that came out and I know now you're expanding beyond just image generation.Ben [00:32:06]: Yeah, like I think we kind of set ourselves up for it, like we saw there was this really interesting generative image world going on. So we kind of, you know, like we're building the tools for that community already, really. And we knew Stable Diffusion was coming out. We knew it was a really exciting thing, you know, it was the best generative image model so far. I think the thing we underestimated was just like what an inflection point it would be, where it was, I think Simon Willison put it this way, where he said something along the lines of it was a model that was open source and tinkerable and like, you know, it was just good enough and open source and tinkerable such that it just kind of took off in a way that none of the models had before. And like what was really neat about Stable Diffusion is it was open source so you could actually tinker with it, compared to like DALL-E, for example, which was like sort of equivalent quality. And like the first week we saw like people making animation models out of it. We saw people make like game texture models that like use circular convolutions to make repeatable textures. We saw, you know, a few weeks later, like people were fine tuning it so you could make, put your face in these models and all of these other-Swyx [00:33:10]: Textual inversion.Ben [00:33:11]: Yep. Yeah, exactly. That happened a bit before that. And all of this sort of innovation was happening all of a sudden. And people were publishing on Replicate because you could just like publish arbitrary models on Replicate. So we had this sort of supply of like interesting stuff being built. But because it was a sufficiently good model, there was also just like a ton of people building with it. They were like, oh, we can build products with this thing.
And this was like about the time where people were starting to get really interested in AI. So like tons of product builders wanted to build stuff with it. And we were just like sitting in there in the middle, it's like the interface layer between like all these people who wanted to build and all these like machine learning experts who were building cool models. And that's like really where it took off. We were just sort of incredible supply, incredible demand, and we were just like in the middle. And then, yeah, since then, we've just kind of grown and grown really. And we've been building a lot for like the indie hacker community, these like individual tinkerers, but also startups and a lot of large companies as well who are sort of exploring and building AI things. Then kind of the same thing happened like middle of last year with language models and Llama 2, where the same kind of Stable Diffusion effect happened with Llama. And Llama 2 was like our biggest week of growth ever because like tons of people wanted to tinker with it and run it. And you know, since then we've just been seeing a ton of growth in language models as well as image models. Yeah. We're just kind of riding a lot of the interest that's going on in AI and all the people building in AI, you know. Yeah.Swyx [00:34:29]: Kudos. Right place, right time. But also, you know, took a while to position for the right place before the wave came. I'm curious if like you have any insights on these different markets. So Pieter Levels, notably a very loud person, very picky about his tools. I wasn't sure actually if he used you. He does. You named him on your Series B blog post, and Danny Postma as well, his competitor, all in that wave. What are their needs versus, you know, the more enterprise or B2B type needs? Did you come to a decision point where you're like, okay, you know, how serious are these indie hackers versus like the actual businesses that are bigger and perhaps better customers because they're less churny?Ben [00:35:04]: They're surprisingly similar because I think a lot of people right now want to use and build with AI, but they're not AI experts and they're not infrastructure experts either. So they want to be able to use this stuff without having to like figure out all the internals of the models and, you know, like touch PyTorch and whatever. And they also don't want to be like setting up and booting up servers. And that's the same all the way from like indie hackers just getting started because like obviously you just want to get started as quickly as possible, all the way through to like large companies who want to be able to use this stuff, but don't have like all of the experts on staff. You know, big companies like Google and so on do actually have a lot of experts on staff, but the vast majority of companies don't. And they're all software engineers who want to be able to use this AI stuff, but they just don't know how to use it. And it's like, you really need to be an expert and it takes a long time to like learn the skills to be able to use that. So they're surprisingly similar in that sense. I think it's also kind of unfair on the indie community: they're surprisingly not churny or spiky. They're building real, established businesses, which is like, kudos to them, building these really large, sustainable businesses, often just as solo developers.
And it's kind of remarkable how they can do that actually, and it's a credit to a lot of their like product skills. And you know, we're just like there to help them, being like their machine learning team effectively, to help them use all of this stuff. A lot of these indie hackers are some of our largest customers, like alongside some of our biggest customers that you would think would be spending a lot more money than them, but yeah.Swyx [00:36:35]: And we should name some of these. So you have them on your landing page: you have BuzzFeed, you have Unsplash, Character AI. What do they power? What can you say about their usage?Ben [00:36:43]: Yeah, totally. It's kind of various things.Swyx [00:36:46]: Well, I mean, I'm naming them because they're on your landing page. So you have logo rights. It's useful for people to, like, I'm not imaginative. It's monkey see, monkey do, right? Like if I see someone doing something that I want to do, then I'm like, okay, Replicate's great for that.Ben [00:37:00]: Yeah, yeah, yeah.Swyx [00:37:01]: So that's what I think about case studies on company landing pages is that it's just a way of explaining like, yep, this is something that we are good for. Yeah, totally.Ben [00:37:09]: I mean, it's, these companies are doing things all the way up and down the stack at different levels of sophistication. So like Unsplash, for example, they actually publicly posted this story on Twitter where they're using BLIP to annotate all of the images in their catalog. So you know, they have lots of images in the catalog and they want to create a text description of it so you can search for it. And they're annotating images with, you know, an off-the-shelf, open source model, you know, we have this big library of open source models that you can run. And you know, we've got lots of people running these open source models off the shelf. And then most of our larger customers are doing more sophisticated stuff. So they're like fine tuning the models, they're running completely custom models on us. A lot of these larger companies are like, using us for a lot of their, you know, inference, but it's like a lot of custom models and them like writing the Python themselves because they've got machine learning experts on the team. And they're using us for like, you know, their inference infrastructure effectively. And so it's like lots of different levels of sophistication where like some people are using these off-the-shelf models. Some people are fine tuning models. So, like, Pieter Levels is a great example where a lot of his products are based off fine-tuning image models, for example. And then we've also got like larger customers who are just like using us as infrastructure effectively. So yeah, it's like all things up and down, up and down the stack.Alessio [00:38:29]: Let's talk a bit about Cog and the technical layer. So there are a lot of GPU clouds. I think people have different pricing points. And I think everybody tries to offer a different developer experience on top of it, which then lets you charge a premium. Why did you want to create Cog?Ben [00:38:46]: You worked at Docker.Alessio [00:38:47]: What were some of the issues with traditional container runtimes?
And maybe yeah, what were you surprised with as you built it?Ben [00:38:54]: Cog came right from the start, actually, when we were thinking about this, you know, evaluation, the sort of benchmarking system for machine learning researchers, where we wanted researchers to publish their models in a standard format that was guaranteed to keep on running, that you could replicate the results of, like that's where the name came from. And we realized that we needed something like Docker to make that work, you know. And I think it was just like natural from my point of view of like, obviously that should be open source, that we should try and create some kind of open standard here that people can share. Because if more people use this format, then that's great for everyone involved. I think the magic of Docker is not really in the software. It's just like the standard that people have agreed on, like, here are a bunch of keys for a JSON document, basically. And you know, that was the magic of like the metaphor of real containerization as well. It's not the containers that are interesting. It's just like the size and shape of the damn box, you know. And it's a similar thing here, where really we just wanted to get people to agree on like, this is what a machine learning model is. This is how a prediction works. This is what the inputs are, this is what the outputs are. So Cog is really just a Docker container that attaches to a CUDA device, if it needs a GPU, that has an OpenAPI specification as a label on the Docker image. And the OpenAPI specification defines the interface for the machine learning model, like the inputs and outputs effectively, or the params in machine learning terminology. And you know, we just wanted to get people to kind of agree on this thing. And it's like general purpose enough, like we weren't saying like, some of the existing things were like at the graph level, but we really wanted something general purpose enough that you could just put anything inside this and it was like future compatible and it was just like arbitrary software. And you know, it'd be future compatible with like future inference servers and future machine learning model formats and all this kind of stuff. So that was the intent behind it. It just came naturally that we wanted to define this format. And that's been really working for us. Like a bunch of people have been using Cog outside of Replicate, which is kind of our original intention, like this should be how machine learning is packaged and how people should use it. Like it's common to use Cog in situations where like maybe they can't use the SaaS service because I don't know, they're in a big company and they're not allowed to use a SaaS service, but they can use Cog internally still. And like they can download the models from Replicate and run them internally in their org, which we've been seeing happen. And that works really well. People who want to build like custom inference pipelines, but don't want to like reinvent the world, they can use Cog off the shelf and use it as like a component in their inference pipelines. We've been seeing tons of usage like that and it's just been kind of happening organically. We haven't really been trying, you know, but it's like there if people want it and we've been seeing people use it. So that's great. Yeah.
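To make the shape of that interface concrete, here is a minimal sketch of what a Cog model can look like, assuming Cog's public Python interface (BasePredictor, Input) and its cog.yaml conventions; the command name, fields, and behavior are illustrative, not a specific Replicate model.

```python
# A hedged sketch of a Cog model, not a specific published model.
# The accompanying cog.yaml declares the environment at roughly the level a
# researcher thinks in, e.g.:
#
#   build:
#     gpu: false
#     python_version: "3.10"
#   predict: "predict.py:Predictor"
#
# Building this produces a Docker image whose OpenAPI schema (stored as a
# label on the image) is derived from the typed signature below.

from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # In a real model, this is where weights would be loaded onto the GPU,
        # once per container start rather than once per request.
        self.prefix = "echo: "

    def predict(
        self,
        prompt: str = Input(description="Text to echo back"),
        repeat: int = Input(description="How many times to repeat it", default=1),
    ) -> str:
        # Cog turns this typed function into an HTTP inference server; the
        # inputs and outputs here are the model's API.
        return self.prefix + " ".join([prompt] * repeat)
```

With that in place, something like `cog predict -i prompt="hello"` would build the image and run a local prediction; the point is that the typed signature, not a hand-written spec, is what becomes the model's interface.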
So a lot of it is just sort of philosophical of just like, this is how it should work from my experience at Docker, you know, and there's just a lot of value from like the core being open, I think, and that other people can share it and it's like an integration point. So, you know, if Replicate, for example, wanted to work with a testing system, like a CI system or whatever, we can just like interface at the Cog level, like that system just needs to produce Cog models and then you can like test your models on that CI system before they get deployed to Replicate. And it's just like a format that everyone, we can get everyone to agree on, you know.Alessio [00:41:55]: What do you think, I guess, Docker got wrong? Because if I look at a Docker Compose file and a Cog definition, first of all, Cog is kind of like the Dockerfile plus the Compose versus in Docker Compose, you're just exposing the services. And also Docker Compose is very like ports driven versus you have like the actual, you know, predict, this is what you have to run.Ben [00:42:16]: Yeah.Alessio [00:42:17]: Any learnings and maybe tips for other people building container based runtimes, like how much should you separate the API services versus the image building or how much you want to build them together?Ben [00:42:29]: I think it was coming from two sides. We were thinking about the design from the point of view of user needs, what are their problems and what problems can we solve for them, but also what the interface should be for a machine learning model. And it was sort of the combination of two things that led us to this design. So the thing I talked about before was a little bit of like the interface around the machine learning model. So we realized that we wanted to be general purpose. We wanted to be at the like JSON, like human readable things rather than the tensor level. So it was like an OpenAPI specification that wrapped a Docker container. And that's where that design came from. And it's really just a wrapper around Docker. So we were kind of building on, standing on shoulders there, but Docker is too low level. So it's just like arbitrary software. So we wanted to be able to like have an OpenAPI specification that defined the function effectively that is the machine learning model. But also like how that function is written, how that function is run, which is all defined in code and stuff like that. So it's like a bunch of abstraction on top of Docker to make that work. And that's where that design came from. But the core problems we were solving for users was that Docker is really hard to use and productionizing machine learning models is really hard. So on the first part of that, we knew we couldn't use Dockerfiles. Like Dockerfiles are hard enough for software developers to write. I'm saying this with love as somebody who worked on Docker and, like, worked on Dockerfiles, but it's really hard to use. And you need to know a bunch about Linux, basically, because you're running a bunch of CLI commands. You need to know a bunch about Linux and best practices and like how apt works and all this kind of stuff. So we're like, OK, we can't get to that level. We need something that machine learning researchers will be able to understand, like people who are used to like Colab notebooks. And what they understand is they're like, I need this version of Python. I need these Python packages. And somebody told me to apt-get install something. You know? If there was sudo in there, I don't really know what that means.
So we tried to create a format that was at that level, and that's what cog.yaml is. And we were really kind of trying to imagine like, what is that machine learning researcher going to understand, you know, and trying to build for them. Then the productionizing machine learning models thing is like, OK, how can we package up all of the complexity of like productionizing machine learning models, like picking CUDA versions, like hooking it up to GPUs, writing an inference server, defining a schema, doing batching, all of these just like really gnarly things that everyone does again and again. And just like, you know, provide that as a tool. And that's where that side of it came from. So it's like combining those user needs with, you know, the sort of world need of needing like a common standard for like what a machine learning model is. And that's how we thought about the design. I don't know whether that answers the question.Alessio [00:45:12]: Yeah. So your idea was like, hey, you really want what Docker stands for in terms of standard, but you actually don't want people to do all the work that goes into Docker.Ben [00:45:22]: It needs to be higher level, you know?Swyx [00:45:25]: So I want to, for the listener, you're not the only standard that is out there. As with any standard, there must be 14 of them. You are surprisingly friendly with Ollama, who are your former colleagues from Docker, who came out with the Modelfile. Mozilla came out with the Llamafile. And then I don't know if this is in the same category even, but I'm just going to throw it in there. Like Hugging Face has the transformers and diffusers library, which is a way of disseminating models that obviously people use. How would you compare and contrast your approach with Cog versus all these?Ben [00:45:53]: It's kind of complementary, actually, which is kind of neat in that a lot of transformers, for example, is lower level than Cog. So it's a Python library effectively, but you still need to like...Swyx [00:46:04]: Expose them.Ben [00:46:05]: Yeah. You still need to turn that into an inference server. You still need to like install the Python packages and that kind of thing. So lots of Replicate models are transformers models and diffusers models inside Cog, you know? So that's like the level that that sits. So it's very complementary in some sense. We're kind of working on integration with Hugging Face such that you can deploy models from Hugging Face into Cog models and stuff like that to Replicate. And some of these things like Llamafile and what Ollama are working on are also very complementary in that they're doing a lot of the sort of running these things locally on laptops, which is not a thing that works very well with Cog. Like Cog is really designed around servers and attaching to CUDA devices and NVIDIA GPUs and this kind of thing. So we're actually like, you know, figuring out ways that like we can, those things can be interoperable because, you know, they should be and they are quite complementary and that you should be able to, like, take a model on Replicate and run it on your local machine. You should be able to take a model on your machine and run it in the cloud.Swyx [00:47:02]: Is the base layer something like, is it at the like the GGUF level, which by the way, I need to get a primer on like the different formats that have emerged, or is it at the star-dot-file level, which is Modelfile, Llamafile, whatever, whatever, or is it at the Cog level?
I don't know, to be honest.Ben [00:47:16]: And I think this is something we still have to figure out. There's a lot yet, like exactly where those lines are drawn. Don't know exactly. I think this is something we're trying to figure out ourselves, but I think there's certainly a lot of promise about these systems interoperating. We just want things to work together. You know, we want to try and reduce the number of standards. So the more these things can interoperate and, you know
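To ground the "run these models with a single line of code" experience described earlier in this conversation, here is a hedged sketch using Replicate's Python client; the model identifier and input fields are illustrative placeholders rather than a specific published model, and a REPLICATE_API_TOKEN environment variable is assumed to be set.

```python
import replicate

# Illustrative only: "owner/some-image-model:version-id" is a placeholder, and
# the input keys depend on the model's own schema, which is the same OpenAPI
# schema Cog derives from the predictor's typed signature.
output = replicate.run(
    "owner/some-image-model:version-id",
    input={"prompt": "a watercolor painting of a lighthouse"},
)
print(output)
```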
Continue the journey of building a Next.js application as Aurooba explains how to use Supabase to handle all aspects of user authentication in your app, including user accounts, email notifications, and session data. They also dig into server vs client side differences.A full transcript of the episode is available on the website. Watch the video podcast on YouTube and subscribe to our channel and newsletter to hear about episodes (and more) first!Supabase - https://supabase.com/Supabase SSR - https://supabase.com/docs/guides/auth/server-side/creating-a-clientAuthUI - https://supabase.com/docs/guides/auth/auth-helpers/auth-uiGravity Forms - https://www.gravityforms.com/Next.js App Router - https://nextjs.org/docs/appBrian's website – https://www.briancoords.comAurooba's website – https://aurooba.com (00:00) - S2 E08 (00:08) - Intro Rant (01:43) - Decisions and Planning (04:11) - Today's Topic - Authentication (07:06) - The Supabase Admin UI (09:54) - Authentication UI in Supabase (15:35) - UI versus Config Files or CLIs (17:30) - Frontend Preview - What are we building? (20:38) - AuthUI (23:58) - Our package.json and Cookies (26:32) - Folder Structure (29:17) - Setting up a Supabase Client (35:33) - Submitting Forms to Supabase (38:45) - Session Data and Server-side Console (41:24) - Scaffolds and Boilerplates (42:29) - Flexibility vs Effort (48:40) - Next episode
This episode features a discussion with Amar Goel, co-founder and CEO of Bito, a company revolutionizing the software development process through AI-driven tools. Focusing on increasing developer productivity and code quality, Bito integrates ChatGPT into IDEs and CLIs, streamlining the coding process. Amar and I discuss Bito's role in enhancing software development, its key features, and the future trajectory of AI in coding. The conversation also touches on job security in the evolving landscape of AI and software development. Founded by seasoned entrepreneurs and technologists in the heart of Silicon Valley in 2020, Bito has emerged as a pioneering AI-driven software development company. Topics Covered: Amar Goel's background and journey to founding Bito. The inception of Bito and its mission to transform software development with Gen AI. Bito's integration with IDEs for enhanced coding efficiency. How Bito AI assists in understanding, writing, testing, and documenting code. The significance of Bito for non-developers, including product managers. Discussion on Bito's AI models and their selection process for specific tasks. The concept and future development of AI agents in Bito for automating coding workflows. Exploring the impact of generative AI on job security and the software industry. Personal insights and plans for Bito in 2024, focusing on developing AI agents for code review and unit testing. ☑️ Web: Bito AI Official Website☑️ Crunchbase: Bito AI Crunchbase Profile ☑️ Support the Channel by buying a coffee? - https://ko-fi.com/gtwgt ☑️ Technology and Topics Mentioned: Bito, Generative AI, ChatGPT, IDE Integration, CLI Tools, Software Development Efficiency, AI-Powered Coding, Code Quality Enhancement, Job Security in AI Era, DevOps, AI Code Completions, AI Agents, Vector Database, AI Model Selection, Automation in Coding. ☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or visit https://www.gtwgt.com ☑️ Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1 ☑️ Subscribe to Spotify: https://open.spotify.com/show/5Y1Fgl4DgGpFd5Z4dHulVX • Web - https://gtwgt.com • Twitter - https://twitter.com/GTwGTPodcast • Apple Podcasts - https://podcasts.apple.com/us/podcast/id1519439787?mt=2&ls=1 ☑️ Music: https://www.bensound.com
Topics covered in this episode: Fixit 2: Meta's next-generation auto-fixing linter FastUI Mail list / newsletter conversation CLIs from type hints Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org Brian: @brianokken@fosstodon.org Show: @pythonbytes@fosstodon.org Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too. Michael #1: Fixit 2: Meta's next-generation auto-fixing linter via Bart Kappenburg Fixit is dead! Long live Fixit 2 – the latest version of our open-source auto-fixing linter. Fixit provides a highly configurable linting framework with support for auto-fixes, custom “local” lint rules, and hierarchical configuration, built on LibCST. Fixit 2 is available today on PyPI. Created by Meta's Python Language Foundation team — a hybrid team of both PEs and traditional SWEs — which helps own and maintain the infrastructure and tooling for Python. Interesting comments on this article on Hacker News I wonder if ruff format was already a thing when Fixit was adopted, whether it would exist? Brian #2: FastUI Samuel Colvin “FastUI is a new way to build web application user interfaces defined by declarative Python code.” MK: Reminds me of the code-matches-DOM style of Flutter. See code samples at the end. Michael #3: Mail list / newsletter conversation I've been tired of Mailchimp for a long time Raising the prices month over month by $100 over several months may be the straw But what are the options? Let's ask Mastodon: emailoctopus.com listmonk.app [self hosted, open source] keila.io [self/saas, open source] mailyherald.org [self hosted, open source] sendportal.io [self hosted, open source] brevo.com buttondown.email [django] zoho.com/campaigns/ sendy.co [use your own bulk emailer (e.g. sendgrid or aws ses)] convertkit.com mautic.org [open source] constantcontact.com getresponse.com Brian #4: CLIs from type hints From Sander76 Pydantic Argparse “is a Python package built on top of pydantic which provides declarative typed argument parsing using pydantic models.” Clipstick is a “cli-tool based on Pydantic models.” tyro “is a tool for generating command-line interfaces and configuration objects in Python.” tyro includes support for dataclasses and attrs in place of Pydantic (see the sketch after these show notes) Extras Brian: Django 5.0 has been released vim-keybindings-everywhere-the-ultimate-list - submitted by Paul Barry PythonTest (the podcast formerly known as Test & Code, to be read in an undertone similar to the way one used to say “The artist formerly known as Prince”) has moved from testandcode.com to podcast.pythontest.com Plus more guests are listed now. I think I've gone backwards from current to episode 182. I tried to get my kid to help out, unsuccessfully. May have to hire someone to help. grrr. Michael: Essay: Don't Sweat the Ad Blocker Drama A story: my project this weekend, unify my over 20 domains to one host Joke: Honest LinkedIn
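For the "CLIs from type hints" item above, here is a minimal hedged sketch of the idea using tyro, one of the tools mentioned; the dataclass fields and command behavior are invented for illustration, and the same shape works with the Pydantic-based tools in the list.

```python
from dataclasses import dataclass

import tyro


@dataclass
class Args:
    """Greet someone from the command line."""

    name: str              # becomes a required --name option
    excited: bool = False  # becomes a boolean flag (--excited / --no-excited)
    count: int = 1         # becomes --count with a default shown in --help


def main() -> None:
    # tyro builds the argument parser entirely from the type hints above,
    # so the dataclass is both the CLI definition and the parsed result.
    args = tyro.cli(Args)
    greeting = f"Hello, {args.name}{'!' if args.excited else '.'}"
    print("\n".join([greeting] * args.count))


if __name__ == "__main__":
    main()
```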
It's the fourth and final episode of our series exploring Laravel. Brian takes us through the deployment process using Laravel Forge and AWS. Aurooba discusses "modern" WordPress development and how WordPress solutions like SpinupWP compare to tools like Netlify and Forge.A full transcript of the episode is available on the website. Watch the video podcast on YouTube and subscribe to our channel and newsletter to hear about episodes (and more) first!Suggest an episode - https://suggest.viewsource.fm/All the code - https://github.com/viewSourcePodcast/suggest-episodeTailcolor (Tailwind Color Generator) - https://tailcolor.com/Laravel Forge - https://forge.laravel.com/Spinup WP - https://spinupwp.com/Brian's website – https://www.briancoords.comAurooba's website – https://aurooba.com (00:00) - S02E04 - Laravel pt 4 (00:07) - Our Completed Laravel App (02:34) - Tailwind and Colors (04:56) - AlpineJS and Package Bloat (07:57) - Single Page Apps on Laravel (09:43) - Brian's Three Open Terminals (11:52) - Scaffolds and CLIs in WordPress (15:03) - Handling Build Assets in your Deployment (18:36) - Deployment - Forge (and SpinupWP) (24:25) - Connecting AWS to Forge (27:44) - Automated Git Deployments (31:20) - Git vs SFTP in Managed WordPress Hosting (34:33) - Other cool things like queues (37:14) - Final Thoughts
In this episode, Emmanuel and Guillaume go over what's new in the Java ecosystem (Java 21, SDKman, Temurin, JBang, Quarkus, LangChain4J, …) as well as more general topics such as Unicode, WebAssembly, vector databases, and many other AI-oriented subjects (LLMs, ChatGPT, Anthropic, …). Recorded on October 20, 2023. Episode download: LesCastCodeurs-Episode-301.mp3 News Languages Easily manage multiple Java versions thanks to SDKman https://foojay.io/today/easily-manage-different-java-versions-on-your-machine-with-sdkman/ SDKman supports Java but also GraalVM, JBang, Quarkus, Micronaut, etc. (the CLIs). The CLI UI is still a bit quirky, so this article is a useful refresher. All the changes from Java 8 to Java 21 https://advancedweb.hu/a-categorized-list-of-all-java-and-jvm-features-since-jdk-8-to-21/ We have shared this link before, but the article is updated with every major Java release to cover the latest features, in particular Java 21, which has just come out. Eclipse Temurin will not ship its Java 21 right away https://adoptium.net/en-GB/blog/2023/09/temurin21-delay/ Apparently a new license for the TCK (which validates compliance) needs to be approved; Oracle seems to have issued new terms just a few days before the official Java 21 release. The TCK update arrived on October 9. How was Microsoft able to ship theirs earlier? The Financial Times has a nice article with animated graphics explaining how the transformer neural-network architecture used in large language models works https://ig.ft.com/generative-ai/ LLMs work via relationships between words; the notion of the transformer, which parses entire "sentences", is what captures context; it discusses beam search vs. greedy search, to get not just the next word but a whole set of next words; it talks about hallucination; the article covers text/vector embeddings to represent tokens and their relationships to one another; it describes the attention process that lets LLMs understand frequent associations between tokens; the topic of hallucinations is covered, and to avoid hallucinations, the use of "grounding". The Absolute Minimum Every Software Developer Must Know About Unicode in 2023 https://tonsky.me/blog/unicode/ A nice article explaining Unicode, encodings such as UTF-8 or UTF-16, code points, graphemes, the problems of measuring a string's length, and grapheme normalization for string comparison. If you want to understand Unicode better, this is the article to read! Unicode is basically a number-to-character mapping: roughly 1.1 million code points available, of which 15% are defined and 11% are for private use, so there is room left. And no, emojis do not take up much space.
Private use is, for example, used by Apple to deliver the Apple logo in the Mac's fonts (but not elsewhere). UTF is the encoding of the Unicode number. UTF-32: always 4 bytes. UTF-8: variable encoding of 1 to 4 bytes (ASCII-compatible); it also has a bit of error detection (different byte prefixes) and is optimized for Latin and technical texts like HTML; the main problem is that you can't determine the length by counting bytes, nor jump to the middle of a string directly (variable width). UTF-16 uses 2 or more bytes and is nicer for Asian characters. A character is actually a grapheme that can be made of several code points: é = e U+0065 + ´ U+0301; ☹️ (frowning face) is U+2639 + U+FE0F. Moreover, depending on the language, “:man-facepalming::skin-tone-3:”.length = 5, 7 (Java), 17 (Rust), or 1 (Swift). It depends on the string's encoding (UTF-?). ““I know, I'll use a library to do strlen()!” — nobody, ever.” In Java, use ICU https://github.com/unicode-org/icu Careful: java.text.BreakIterator supports an old version of Unicode, so it's no good. Grapheme rules change with every major Unicode version (every year). Some characters like Å have several encoded representations, hence normalization: NFD, which explodes into lots of code points, or NFC, which groups them as much as possible. Normalize before searching in strings. Some Unicode characters are rendered differently depending on the LOCALE (that's life), and the article goes on. JBang lets you call Java from Python via a PyPI package https://jbang.dev/learn/python-with-jbang/ This is particularly interesting for calling Java from a Jupyter notebook; it calls out to another process (but installs JBang and Java if needed). Libraries Quarkus 3.4 is out https://quarkus.io/blog/quarkus-3-4-1-released/ A CVE, so update your Quarkus installs. Redis 7.2 support. More granularity for disabling Flyway globally or per data source, following the transparent automatic activation in 3.3. quarkus update is the recommended way to upgrade.
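The grapheme and normalization points from the Unicode item above are easy to reproduce in a few lines of Python; this sketch uses only the standard library, so string length here counts code points, not graphemes.

```python
import unicodedata

# Two visually identical strings: a precomposed é (U+00E9) vs. e + combining
# acute accent (U+0065 U+0301).
composed = "caf\u00e9"
decomposed = "cafe\u0301"

print(composed == decomposed)                                # False: different code points
print(len(composed), len(decomposed))                        # 4 vs 5 code points
print(unicodedata.normalize("NFC", decomposed) == composed)  # True after normalization
print(unicodedata.normalize("NFD", composed) == decomposed)  # True after decomposition

# len() counts code points, so a multi-codepoint emoji looks "long":
facepalm = "\U0001F926\U0001F3FC\u200D\u2642\uFE0F"  # man facepalming + skin tone + ZWJ + male + VS16
print(len(facepalm))  # 5 code points, but a single grapheme on screen
```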
How to test whether a virtual thread "pins" https://quarkus.io/blog/virtual-threads-3/ An example with Quarkus of how to generate the stack trace, plus a JUnit utility that fails the test when the thread pins. A series of articles by Clement on virtual threads and how to use them in Quarkus https://quarkus.io/blog/virtual-thread-1/ Discovering LangChain4J, orchestration for generative AI in Java https://glaforge.dev/posts/2023/09/25/discovering-langchain4j/ Guillaume tells us about the young LangChain4J project, inspired by the Python LangChain project, which lets you orchestrate the different components of a generative AI chain. Thanks to this project, Java developers are not left behind and don't need to start coding in Python. LangChain4J integrates with different vector databases such as Chroma or Weaviate, as well as a very handy little in-memory store. LangChain4J supports Google's PaLM API, but also OpenAI. There are various components for loading and splitting documents and for computing the vector embeddings of excerpts of those documents. Video recorded at Devoxx on this topic: https://www.youtube.com/watch?v=ioTPfL9cd9k Infrastructure OpenTF becomes OpenTofu https://www.linuxfoundation.org/press/announcing-opentofu In Dockerfiles, you can use the "heredocs" notation. The EU Cyber Resilience Act (the target of the #fixthecra campaign): excludes foundations while commercial companies are included; defines software classes from non-critical up to class 1 and 2; requires a risk assessment before shipping (no security bugs, secure by default, security updates); documentation of the risk-assessment process and an SBOM, notably; vulnerabilities must be notified within 24 hours. Protests against the opening of Meta's AI models https://spectrum.ieee.org/meta-ai Opening the models and their weights lets actors bypass the restrictions (bias, etc.), so some people at Meta are protesting against Meta's open source policy in this area. The argument is that a model behind an API can be switched off. Those on the other side point out that bypassing ChatGPT's restrictions has been trivial so far, and that obscurity leads to a deficit of transparency and public knowledge, and will affect independent researchers. That said, it is not pure open source, since the sources and how the model is trained are barely published. The OSI is working on a definition of Open Source AI. A site calling for a pause on AI: https://pauseai.info/ WE RISK LOSING CONTROL WE RISK THE EXTINCTION OF HUMANITY WE NEED A PAUSE WE MUST ACT IMMEDIATELY There is a schedule of demonstrations around the world (London, Brussels, SFO… but where is Paris?) Twitter/Discord/Facebook/TikTok/LinkedIn So who will win the race to the extinction of humanity: war, climate change, or AI? Sarah Connor!!!
Tools of the episode A QWERTY layout adapted for accented letters https://altgr-weur.eu/ (via Thomas Recloux) Conferences All the Devoxx Belgium videos are available https://www.youtube.com/@DevoxxForever Hacktoberfest, 10th edition https://hacktoberfest.com/ The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: October 26, 2023: Codeurs en Seine - Rouen (France) October 26-27, 2023: Agile Tour Bordeaux - Bordeaux (France) October 26-29, 2023: SoCraTes-FR - Orange (France) October 30-31, 2023: Asynconf Event - Paris (France) & Online November 2-3, 2023: Agile Tour Nantes - Nantes (France) November 3, 2023: XCraft - Lyon (France) November 7, 2023: DevFest Sophia-Antipolis - Sophia-Antipolis (France) November 10, 2023: BDX I/O - Bordeaux (France) November 15, 2023: DevFest Strasbourg - Strasbourg (France) November 16, 2023: DevFest Toulouse - Toulouse (France) November 18-19, 2023: Capitole du Libre - Toulouse (France) November 23, 2023: DevOps D-Day #8 - Marseille (France) November 23, 2023: Agile Grenoble - Grenoble (France) November 30, 2023: PrestaShop Developer Conference - Paris (France) November 30, 2023: WHO run the Tech - Rennes (France) December 6-7, 2023: Open Source Experience - Paris (France) December 6-8, 2023: API Days Paris - Paris (France) December 7, 2023: Agile Tour Aix-Marseille - Gardanne (France) December 7-8, 2023: TechRocks Summit - Paris (France) December 8, 2023: DevFest Dijon - Dijon (France) January 31-February 3, 2024: SnowCamp - Grenoble (France) February 1, 2024: AgiLeMans - Le Mans (France) February 15-16, 2024: Touraine Tech - Tours (France) March 6-7, 2024: FlowCon 2024 - Paris (France) March 14-15, 2024: pgDayParis - Paris (France) March 19-22, 2024: KubeCon + CloudNativeCon Europe 2024 - Paris (France) March 28-29, 2024: SymfonyLive Paris 2024 - Paris (France) April 17-19, 2024: Devoxx France - Paris (France) April 18-20, 2024: Devoxx Greece - Athens (Greece) April 25-26, 2024: MiXiT - Lyon (France) April 25-26, 2024: Android Makers - Paris (France) May 8-10, 2024: Devoxx UK - London (UK) May 24, 2024: AFUP Day Nancy - Nancy (France) May 24, 2024: AFUP Day Poitiers - Poitiers (France) May 24, 2024: AFUP Day Lille - Lille (France) May 24, 2024: AFUP Day Lyon - Lyon (France) June 6-7, 2024: DevFest Lille - Lille (France) September 19-20, 2024: API Platform Conference - Lille (France) & Online October 7-11, 2024: Devoxx Belgium - Antwerp (Belgium) October 10-11, 2024: Volcamp - Clermont-Ferrand (France) Contact us To react to this episode, come and discuss it on the Google group https://groups.google.com/group/lescastcodeurs Contact us on Twitter https://twitter.com/lescastcodeurs Submit a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/
Topics covered in this episode: QuickMacHotKey Things I've learned about building CLI tools in Python Warp Terminal (referral code) Python 3.7 EOLed, but I hadn't noticed Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training Python People Podcast Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org Brian: @brianokken@fosstodon.org Show: @pythonbytes@fosstodon.org Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too. Michael #1: QuickMacHotKey This is a set of minimal Python bindings for the undocumented macOS framework APIs that even the most modern, sandboxing-friendly shortcut-binding frameworks use under the hood for actually binding global hotkeys. Thinking of updating my urlify menubar app. Brian #2: Things I've learned about building CLI tools in Python Simon Willison A cool Cookiecutter starter project, if you like Click. Conventions and consistency in commands, arguments, options, and flags. The importance of versioning. Your CLI is an API. Include examples in --help Include --help in documentation. Aside, Typer is also cool, and is built on Click. Michael #3: Warp Terminal (referral code) Really nice reimagining of the terminal Currently macOS only but will be Linux, then Windows New command section & output section mode Blocks can be navigated and searched as a single thing (even if it's 1,000 lines of output) CTRL+R gives a nice history like McFly I've discussed before Completions into popular CLIs (i.e. git) Edit like an editor (even you VIM people
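As a companion to the "Things I've learned about building CLI tools in Python" item above, here is a small hedged sketch in Click of a few of the conventions it mentions (options, flags, a version, and an example baked into --help); the command name and behavior are invented for illustration.

```python
import click


@click.command()
@click.version_option("0.1.0", prog_name="shout")  # your CLI is an API: version it
@click.option("--count", default=1, show_default=True, help="Times to repeat the text.")
@click.option("--upper/--no-upper", default=False, help="Shout in uppercase.")
@click.argument("text")
def shout(count: int, upper: bool, text: str) -> None:
    """Print TEXT a number of times.

    Example: shout --count 2 --upper hello
    """
    for _ in range(count):
        click.echo(text.upper() if upper else text)


if __name__ == "__main__":
    shout()
```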
In this repeat episode picked by PodRocket host Paul Mikulskis, Ian Sutherland, Node.js core contributor and Architect and Developer Experience Lead at Neo Financial, joins the pod to talk about zero-dependency CLIs, why they're fun to build, and what they can teach us about developing other applications. Links https://twitter.com/iansu https://github.com/iansu https://iansutherland.ca Tell us what you think of PodRocket We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com) Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Ian Sutherland.
Waldemar Hummer, Co-Founder & CTO of LocalStack, joins Corey on Screaming in the Cloud to discuss how LocalStack changed Corey's mind on the futility of mocking clouds locally. Waldemar reveals why LocalStack appeals to both enterprise companies and digital nomads, and explains how both see improvements in their cost predictability as a result. Waldemar also discusses how LocalStack is an open-source company first and foremost, and how they're working with their community to evolve their licensing model. Corey and Waldemar chat about the rising demand for esoteric services, and Waldemar explains how accommodating that has led to an increase of adoption from the big data space. About WaldemarWaldemar is Co-Founder and CTO of LocalStack, where he and his team are building the world-leading platform for local cloud development, based on the hugely popular open source framework with 45k+ stars on Github. Prior to founding LocalStack, Waldemar has held several engineering and management roles at startups as well as large international companies, including Atlassian (Sydney), IBM (New York), and Zurich Insurance. He holds a PhD in Computer Science from TU Vienna.Links Referenced: LocalStack website: https://localstack.cloud/ LocalStack Slack channel: https://slack.localstack.cloud LocalStack Discourse forum: https://discuss.localstack.cloud LocalStack GitHub repository: https://github.com/localstack/localstack TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Until a bit over a year ago or so, I had a loud and some would say fairly obnoxious opinion around the futility of mocking cloud services locally. This is not to be confused with mocking cloud services on the internet, which is what I do in lieu of having a real personality. And then one day I stopped espousing that opinion, or frankly, any opinion at all. And I'm glad to be able to talk at long last about why that is. My guest today is Waldemar Hummer, CTO and co-founder at LocalStack. Waldemar, it is great to talk to you.Waldemar: Hey, Corey. It's so great to be on the show. Thank you so much for having me. We're big fans of what you do at The Duckbill Group and Last Week in AWS. So really, you know, glad to be here with you today and have this conversation.Corey: It is not uncommon for me to have strong opinions that I espouse—politely to be clear; I'll make fun of companies and not people as a general rule—but sometimes I find that I've not seen the full picture and I no longer stand by an opinion I once held. And you're one of my favorite examples of this because, over the course of a 45-minute call with you and one of your business partners, I went from, “What you're doing is a hilarious misstep and will never work,” to, “Okay, and do you have room for another investor?” And in the interest of full disclosure, the answer to that was yes, and I became one of your angel investors. It's not exactly common for me to do that kind of a hard pivot. And I kind of suspect I'm not the only person who currently holds the opinion that I used to hold, so let's talk a little bit about that. 
At the very beginning, what is LocalStack and what does it you would say that you folks do?Waldemar: So LocalStack, in a nutshell, is a cloud emulator that runs on your local machine. It's basically like a sandbox environment where you can develop your applications locally. We have currently a range of around 60, 70 services that we provide, things like Lambda Functions, DynamoDB, SQS, like, all the major AWS services. And to your point, it is indeed a pretty large undertaking to actually implement the cloud and run it locally, but with the right approach, it actually turns out that it is feasible and possible, and we've demonstrated this with LocalStack. And I'm glad that we've convinced you to think of it that way as well.Corey: A couple of points that you made during that early conversation really stuck with me. The first is, “Yeah, AWS has two, no three no four-hundred different service offerings. But look at your customer base. How many of those services are customers using in any real depth? And of those services, yeah, the APIs are vast, and very much a sprawling pile of nonsense, but how many of those esoteric features are those folks actually using?” That was half of the argument that won me over.The other half was, “Imagine that you're an enormous company that's an insurance company or a bank. And this year, you're hiring 5000 brand new developers, fresh out of school. Two to 3000 of those developers will still be working here in about a year as they wind up either progressing in other directions, not winding up completing internships, or going back to school after internships, or for a variety of reasons. So, you have that many people that you need to teach how to use cloud in the context that we use cloud, combined with the question of how do you make sure that one of them doesn't make a fun mistake that winds up bankrupting the entire company with a surprise AWS bill?” And those two things combined turned me from, “What you're doing is ridiculous,” to, “Oh, my God. You're absolutely right.”And since then, I've encountered you in a number of my client environments. You were absolutely right. This is something that resonates deeply and profoundly with larger enterprise customers in particular, but also folks who just don't want to wind up being beholden to every time they do a deploy to anything to test something out, yay, I get to spend more money on AWS services.Waldemar: Yeah, totally. That's spot on. So, to your first point, so definitely we have a core set of services that most people are using. So, things like Lambda, DynamoDB, SQS, like, the core serverless, kind of, APIs. And then there's kind of a long tail of more exotic services that we support these days, things like, even like QLDB, the quantum ledger database, or, you know, managed streaming for Kafka.But like, certainly, like, the core 15, 20 services are the ones that are really most used by the majority of people. And then we also, you know, pro offering have some very, sort of, advanced services for different use cases. So, that's to your first point.And second point is, yeah, totally spot on. So LocalStack, like, really enables you to experiment in the sandbox. So, we both see it as an experimentation, also development environment, where you don't need to think about cloud costs. And this, I guess, will be very close to your heart in the work that you're doing, the costs are becoming really predictable as well, right? 
Because in the cloud, you know, I worked at different companies before doing LocalStack where we were using AWS resources, and you can end up in a situation where overnight, you accumulate, you know, hundreds of thousands of dollars of AWS bill because you've turned on a certain feature, or some, you know, connectivity into some VPC or networking configuration that just turns out to be costly.Also, one more thing that is worth mentioning, like, we want to encourage, like, frequent testing, and a lot of the cloud's billing and cost structure is focused around, for example, hourly billing of resources, right? And if you have a test that just spins up resources that run for a couple of minutes, you still end up paying the entire hour. And with LocalStack, really, that brings down the cloud bills significantly because you can really test frequently, the cycles become much faster, and it's also again, more efficient, more cost-effective.Corey: There's something useful to be said for, “Well, how do I make sure that I turn off resources when I'm done?” In cloud, it's a bit of a game of guess-and-check. And you turn off things you think are there and you wait a few days and you check the bill again, and you go and turn more things off, and the cycle repeats. Or alternately, wait for the end of the month and wonder in perpetuity why you're being billed 48 cents a month, and not be clear on why. Restarting the laptop is a lot more straightforward.I also want to call out some of my own bias on this where I used to be a big believer in being able to build and deploy and iterate on things locally because well, what happens when I'm in a plane with terrible WiFi? Well, in the before times, I flew an awful lot and was writing a fair bit of, well, cloudy nonsense and I still never found that to be a particular blocker on most of what I was doing. So, it always felt a little bit precious to me when people were talking about, well, what if I can't access the internet to wind up building and deploying these things? It's now 2023. How often does that really happen? But is that a use case that you see a lot of?Waldemar: It's definitely a fair point. And probably, like, 95% of cloud development these days is done in a high internet bandwidth environment, maybe some corporate network where you have really fast internet access. But that's only a subset, I guess, of the world out there, right? So, there might be situations where, you know, you may have bad connectivity. Also, maybe you live in a region—or maybe you're traveling even, right? So, there's a lot more and more people who are just, “Digital nomads,” quote-unquote, right, who just like to work in remote places.Corey: You're absolutely right. My bias is that I live in San Francisco. I have symmetric gigabit internet at home. There's not a lot of scenarios in my day-to-day life—except when I'm, you know, on the train or the bus traveling through the city—because thank you, Verizon—where I have impeded connectivity.Waldemar: Right. Yeah, totally. And I think the other aspect of this is kind of the developers just like to have things locally, right, because it gives them the feeling of you know, better control over the code, like, being able to integrate into their IDEs, setting breakpoints, having these quick cycles of iterations.
And again, this is something that there's more and more tooling coming up in the cloud ecosystem, but it's still inherently a remote execution that just, you know, takes the round trip of uploading your code, deploying, and so on, and that's just basically the pain point that we're addressing with LocalStack.Corey: One thing that did surprise me as well was discovering that there was a lot more appetite for this sort of thing in enterprise-scale environments. I mean, some of the reference customers that you have on your website include divisions of the UK Government and 3M—you know, the Post-It note people—as well as a number of other very large environments. And at first, that didn't make a whole lot of sense to me, but then it suddenly made an awful lot of sense because it seems—and please correct me if I'm wrong—that in order to use something like this at scale and use it in a way that isn't, more or less getting it into a point where the administration of it is more trouble than it's worth, you need to progress past a certain point of scale. An individual developer on their side project is likely just going to iterate against AWS itself, whereas a team of thousands of developers might not want to be doing that because they almost certainly have their own workflows that make that process high friction.Waldemar: Yeah, totally. So, what we see a lot is, especially in larger enterprises, dedicated teams, like, developer experience teams, whose main job is to really set up a workflow and environment where developers can be productive, most productive, and this can be, you know, on one side, like, setting up automated pipelines, provisioning maybe AWS sandbox and test accounts. And like some of these teams, when we introduce LocalStack, it's really a game-changer because it becomes much more decoupled and like, you know, distributed. You can basically configure your CI pipeline, just, you know, spin up the container, run your tests, tear down again afterwards. So, you know, it's less dependencies.And also, one aspect to consider is the aspect of cloud approvals. A lot of companies that we work with have, you know, very stringent processes around, even getting access to the clouds. Some SRE team needs to enable their IAM permissions and so on. With LocalStack, you can just get started from day one and just get productive and start testing from the local machine. So, I think those are patterns that we see a lot, in especially larger enterprise environments as well, where, you know, there might be some regulatory barriers and just, you know, process-wise steps as well.Corey: When I started playing with LocalStack myself, one of the things that I found disturbingly irritating is, there's a lot that AWS gets largely right with its AWS command-line utility. You can stuff a whole bunch of different options into the config for different profiles, and all the other tools that I use mostly wind up respecting that config. The few that extend it add custom lines to it, but everything else is mostly well-behaved and ignores the things it doesn't understand. But there is no facility that lets you say, “For this particular profile, use this endpoint for AWS service calls instead of the normal ones in public regions.” In fact, to do that, you effectively have to pass specific endpoint URLs to arguments, and I believe the syntax on that is not globally consistent between different services.It just feels like a living nightmare. 
At first, I was annoyed that you folks wound up having to ship your own command-line utility to wind up interfacing with this. Like, why don't you just add a profile? And then I tried it myself and, oh, I'm not the only person who knows how this stuff works that has ever looked at this and had that idea. No, it's because AWS is just unfortunate in that respect.Waldemar: That is a very good point. And you're touching upon one of the major pain points that we have, frankly, with the ecosystem. So, there are some pull requests against the AWS open-source repositories for the SDKs and various other tools, where folks—not only LocalStack, but other folks in the community—have asked for introducing, for example, an AWS endpoint URL environment variable. These [proposals 00:12:32], unfortunately, were never merged. So, it would definitely make our lives a whole lot easier, but so far, we basically have to maintain these, you know, these wrapper scripts, basically, AWS local, CDK local, which basically just, you know, point the client to local endpoints. It's a good workaround for now, but I would assume and hope that the world's going to change in the upcoming years.Corey: I really hope so because everything else I can think of is just bad. The idea of building a custom wrapper around the AWS command-line utility that winds up checking the profile section, and oh, if this profile is that one, call out to this tool, otherwise it just becomes a pass-through. That has security implications that aren't necessarily terrific, you know, in large enterprise companies that care a lot about security. Yeah, pretend to be a binary you're not is usually the kind of thing that makes people sad when security politely kicks their door in.Waldemar: Yeah, we actually have pretty, like, big hopes for the v3 wave of the AWS SDKs, because there is some restructuring happening with the endpoint resolution. And also, you can, in your profile, by now have, you know, special resolvers for endpoints. But still the case of just pointing all the SDKs and CLI to a custom endpoint is just not yet resolved. And this is, frankly, quite disappointing, actually.Corey: While we're complaining about the CLI, I'll throw one of my recurring issues with it in. I would love for it to adopt the Linux slash Unix paradigm of having a config.d directory that you can reference from within the primary config file, and then any file within that directory in the proper syntax winds up getting adopted into what becomes a giant composable config file, generated dynamically. The reason being is, I can have entire lists of profiles in separate files that I could then wind up dropping in and out on a client-by-client basis. So, I don't inadvertently expose who some of my clients are, in the event that winds up being part of the way that they have named their AWS accounts.That is one of those things I would love but it feels like it's not a common enough use case for there to be a whole lot of traction around it. And I guess some people would make a fair point if they were to say that the AWS CLI is the most widely deployed AWS open-source project, even though all it does is give money to AWS more efficiently.Waldemar: Yeah. Great point. Yeah, I think, like, having some way to customize and, like, mingle or mangle your configurations in a more easy fashion would be super useful.
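The workaround Waldemar describes, pointing the client at a local endpoint rather than relying on a profile setting, looks roughly like this with boto3; port 4566 is LocalStack's usual edge port and the dummy credentials are placeholders, so treat the details as assumptions about a default local setup rather than a verified configuration.

```python
import boto3

# Everything goes to the local emulator instead of AWS; the credentials are
# dummies because LocalStack does not validate them by default.
localstack = dict(
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3 = boto3.client("s3", **localstack)
sqs = boto3.client("sqs", **localstack)

s3.create_bucket(Bucket="demo-bucket")
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the laptop")

print([b["Name"] for b in s3.list_buckets()["Buckets"]])
print(sqs.receive_message(QueueUrl=queue_url).get("Messages", []))
```

The wrapper scripts mentioned above do essentially the same thing for the CLI and CDK, injecting the endpoint so you don't have to thread an endpoint URL through every command yourself.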
I guess it might be a slippery slope to getting, you know, into something like I don't know, Helm for EKS and, like, really, you know, having to maintain a whole templating language for these configs. But I certainly agree with you, to just you know, at least having [plug 00:15:18] points for being able to customize the behavior of the SDKs and CLIs would be extremely helpful and valuable.Corey: This is not—unfortunately—my first outing with the idea of trying to have AWS APIs done locally. In fact, almost a decade ago now, I did a build-out at a very large company of a… well, I would say that the build-out was not itself very large—it was about 300 nodes—that were all running Eucalyptus, which before it died on the vine, was imagined as a way of just emulating AWS APIs locally—done in Java, as I recall—and exposing local resources in ways that comported with how AWS did things. So, the idea being that you could write configuration to deploy any infrastructure you wanted in AWS, but also treat your local data center the same way. That idea unfortunately did not survive in the marketplace, which is kind of a shame, on some level. What was it that inspired you folks to wind up building this with an eye towards local development rather than run this as a private cloud in your data center instead?Waldemar: Yeah, very interesting. And I do also have some experience [unintelligible 00:16:29] from my past university days with Eucalyptus and OpenStack also, you know, running some workloads in an on-prem cluster. I think the main difference, first of all, these systems were extremely hard, notoriously hard to set up and maintain, right? So, lots of moving parts: you had your image server, your compute system, and then your messaging subsystems. Lots of moving parts, and wanting to have everything basically much more monolithic and in a single container.And Docker really sort of provides a great platform for us, which is create everything in a single container, spin up locally, make it very lightweight and easy to use. But I think really the first days of LocalStack, the idea was really, was actually with the use case of somebody from our team. Back then, I was working at Atlassian in the data engineering team and we had folks in the team who were commuting to work on the train. And it was literally this use case that you mentioned before about being able to work basically offline on your commute. And this is kind of where the first lines of code were written and then kind of the idea evolved from there.We put it into the open-source, and then, kind of, it was growing over the years. But it really started as not having it as an on-prem, like, heavyweight server, but really as a lightweight system that you can easily—that is easily portable across different systems as well.Corey: That is a good question. Very often, when I'm using various tools that are aimed at development use cases, it is very clear that one particular operating system is invariably going to be the first-class citizen and everything else is a best effort. Ehh, it might work; it might not. Does LocalStack feel that way? And if so, what's the operating system that you want to be on?Waldemar: I would say we definitely work best on Mac OS and Linux. It also works really well on Windows, but I think given that some of our tooling in the ecosystem is also pretty much geared towards Unix systems, I think those are the platforms it will work well with.
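(The config.d idea Corey floats above does not exist in the AWS CLI today; the sketch below is an entirely hypothetical helper that approximates it by stitching per-client profile fragments into a single generated config file. The directory layout and file names are invented for illustration.)

    # Hypothetical helper: concatenate ~/.aws/config.d/*.conf fragments into
    # ~/.aws/config, so per-client profiles can be dropped in and out.
    # (The AWS CLI itself has no such include mechanism; this is just the idea.)
    from pathlib import Path

    CONFIG_DIR = Path.home() / ".aws" / "config.d"   # made-up convention
    TARGET = Path.home() / ".aws" / "config"

    def build_config() -> None:
        parts = ["# Generated file -- edit the fragments in config.d/ instead\n"]
        for fragment in sorted(CONFIG_DIR.glob("*.conf")):
            parts.append(f"\n# --- {fragment.name} ---\n")
            parts.append(fragment.read_text())
        TARGET.write_text("".join(parts))

    if __name__ == "__main__":
        build_config()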
Again, on the other hand, Docker is really a platform that helps us a lot being compatible across operating systems and also CPU architectures. We have a multi-arch build now for AMD and ARM64. So, I think in that sense, we're pretty broad in terms of the compatibility spectrum.Corey: I do not have any insight into how the experience goes on Windows, given that I haven't used that operating system in anger for, wow, 15 years now, but I will say that it's been top-flight on Mac OS, which is what I spend most of my time depressed that I'm using, but for desktop experiences, it seems to work out fairly well. That said, having a focus on Windows seems like it would absolutely be a hard requirement, given that so many developer workstations in very large enterprises tend to skew very Windows-heavy. My hat is off to people who work with Linux and Linux-like systems in environments like that where even line endings become psychotically challenging. I don't envy them their problems. And I have nothing but respect for people who can power through it. I never had the patience.Waldemar: Yeah. Same here and definitely, I think everybody has their favorite operating system. For me, it's also been mostly Linux and Mac in the last couple of years. But certainly, we definitely want to be broad in terms of the adoption, and working with large enterprises often you have—you know, we want to fit into the existing landscape and environment that people work in. And we solve this by platform abstractions like Docker, for example, as I mentioned, and also, for example, Python—some more of our tooling within Python is also pretty nicely supported across platforms. But I do feel the same way as you, like, having been working with Windows for quite some time, especially for development purposes.Corey: What have you noticed that your customer usage patterns slash requests have been saying about AWS service adoption? I have to imagine that everyone cares whether you can mock S3 effectively. EC2, DynamoDB, probably. SQS, of course. But beyond the very small baseline level of offering, what have you seen surprising demand for, as I guess, customer implementation of more esoteric services continues to climb?Waldemar: Mm-hm. Yeah, so these days it's actually pretty [laugh] pretty insane the level of coverage we already have for different services, including some very exotic ones, like QLDB, as I mentioned, Kafka. We even have Managed Airflow, for example. I mean, a lot of these services are essentially mostly, like, wrappers around the API. This is essentially also what AWS is doing, right? So, they're providing an API that basically provisions some underlying resources, some infrastructure.Some of the more interesting parts, I guess, we've seen is the data or big data ecosystem. So, things like Athena, Glue, we've invested quite a lot of time in, you know, making that available also in LocalStack so you can have your maybe CSV files or JSON files in an S3 bucket and you can query them from Athena with SQL, basically, right? And that makes it very—especially these big data-heavy jobs that are very heavyweight on AWS, you can iterate very quickly in LocalStack. So, this is where we're seeing a lot of adoption recently.
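(To ground the Athena example: the same boto3 calls you would aim at AWS can be aimed at LocalStack instead, so a SQL query over CSV or JSON files in an emulated S3 bucket comes back in seconds rather than minutes. A rough sketch, assuming the Athena/Glue emulation is available in your LocalStack edition; the database, table, and bucket names are placeholders.)

    import time
    import boto3

    athena = boto3.client(
        "athena",
        endpoint_url="http://localhost:4566",   # LocalStack edge endpoint
        region_name="us-east-1",
        aws_access_key_id="test",
        aws_secret_access_key="test",
    )

    # Run a SQL query over files in an (emulated) S3 bucket; names are placeholders.
    execution = athena.start_query_execution(
        QueryString="SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id",
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes -- locally this is seconds, not minutes.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        print(rows)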
And then also, obviously, things like, you know, Lambda and ECS, like, all the serverless and containerized applications, but I guess those are the more mainstream ones.Corey: I imagine you probably get your fair share of requests for things like CloudFormation or CloudFront, where, this is great, but can you go ahead and add a very lengthy sleep right here, just because it returns way too fast and we don't want people to get their hopes up when they use the real thing. On some level, it feels like exact replication of the AWS customer experience isn't quite in line with what makes sense from a developer productivity point of view.Waldemar: Yeah, that's a great point. And I'm sure that, like, a lot of code out there is probably littered with sleep statements that are just tailored to the specific timing in AWS. In fact, we recently opened an issue in the AWS Terraform provider repository to add a configuration option to configure the timings that Terraform is using for the resource deployment. So, just as an example, an S3 bucket creation takes 60 seconds, like, more than a minute against [unintelligible 00:22:37] AWS. I guess LocalStack, it's a second basically, right?And the AWS Terraform provider has these, like, relatively slow cycles of checking whether the bucket has already been created. And we want to get that configurable to actually reduce the time it takes for local development, right? So, we have an open, sort of, feature request, and we're probably going to contribute to the Terraform repository. But definitely, I share the sentiment that a lot of the tooling ecosystem is built and tailored and optimized towards the experience against the cloud, which often is just slow and, you know, that's what it is, right?Corey: One thing that I didn't expect, though in hindsight it is blindingly obvious, is your support for a variety of different frameworks and deployment methodologies. I've found that it's relatively straightforward to get up and running with the CDK deploying to LocalStack, for instance. And in hindsight, of course; that's obvious. When you start out down that path, though, you tend to think—at least I don't tend to think in that particular way. It's, “Well, yeah, it's just going to be a console-like experience, or I wind up doing CloudFormation or Terraform.” But yeah, the world is advancing relatively quickly and it's nice to see that you are very comfortably keeping pace with that advancement.Waldemar: Yeah, true. And I guess for us, it's really, like, the level of abstraction is sort of increasing, so you know, once you have a solid foundation, with, you know, the CloudFormation implementation, you can leverage a lot of tools that are sitting on top of it, CDK, serverless frameworks. So, CloudFormation is almost becoming, like, the assembly language of the AWS cloud, right, and if you have very solid support for that, a lot of, sort of, tools in the ecosystem will natively be supported on LocalStack. And then, you know, you have things like Terraform, and the Terraform CDK, you know, some of these derived versions of Terraform which also are very straightforward because you just need to point, you know, the target endpoint to localhost and then the rest of the deployment loop just works out of the box, essentially.So, I guess for us, it's really mostly being able to focus on, like, the core emulation, making sure that we have very high parity with the real services. We spend a lot of time and effort on what we call parity testing and snapshot testing.
We make sure that our API responses are identical and really the same as they are in AWS. And this really gives us, you know, a very strong confidence that a lot of tools in the ecosystem are working out-of-the-box against LocalStack as well.Corey: I would also like to point out that I'm also a proud LocalStack contributor at this point because at the start of this year, I noticed, ah, in one of the pages, the copyright year was still saying 2022 and not 2023. So, a single-character pull request? Oh, yes, I am on the board now because that is how you ingratiate yourself with an open-source project.Waldemar: Yeah. Eternal fame to you and kudos for your contribution. But, [laugh] you know, in all seriousness, we do have a quite an active community of contributors. We are an open-source first project; like, we were born in the open-source. We actually—maybe just touching upon this for a second, we use GitHub for our repository, we use a lot of automation around, you know, doing pull requests, and you know, service owners.We also participate in things like the Hacktoberfest, which we participated in last year to really encourage contributions from the community, and also host regular meetups with folks in the community to really make sure that there's an active ecosystem where people can contribute and make contributions like the one that you did with documentation and all that, but also, like, actual features, testing and you know, contributions of different levels. So really, kudos and shout out to the entire community out there.Corey: Do you feel that there's an inherent tension between being an open-source product as well as being a commercial product that is available for sale? I find that a lot of companies feel vaguely uncomfortable with the various trade-offs that they make going down that particular path, but I haven't seen anyone in the community upset with you folks, and it certainly hasn't seemed to act as a brake on your enterprise adoption, either.Waldemar: That is a very good point. So, we certainly are—so we're following an open-source-first model that we—you know, the core of the codebase is available in the community version. And then we have pro extensions, which are commercial and you basically, you know, setup—you sign up for a license. We are certainly having a lot of discussions on how to evolve this licensing model going forward, you know, which part to feed back into the community version of LocalStack. And it's certainly an ongoing evolving model as well, but certainly, so far, the support from the community has been great.And we definitely focus to, kind of, get a lot of the innovation that we're doing back into our open-source repo and make sure that it's, like, really not only open-source but also open contribution for folks to contribute their contributions. We also integrate with other third-party libraries. We're built on the shoulders of giants, if I may say so, other open-source projects that are doing great work with emulators. To name just a few, it's like, [unintelligible 00:27:33] which is a great project that we sort of use and depend upon. We have certain mocks and emulations, for Kinesis, for example, Kinesis mock and a bunch of other tools that we've been leveraging over the years, which are really great community efforts out there. 
And it's great to see such an active community that's really making this vision possible to have a truly local emulated cloud that gives the best experience to developers out there.Corey: So, as of, well, now, when people are listening to this and the episode gets released, v2 of LocalStack is coming out. What are the big differences between LocalStack and now LocalStack 2: Electric Boogaloo, or whatever it is you're calling the release?Waldemar: Right. So, we're super excited to release our v2 version of LocalStack. Planned release date is end of March 2023, so hopefully, we will make that timeline. We did release our first version of LocalStack in July 2022, so it's been roughly seven months since then and we try to have a cadence of roughly six to nine months for the major releases. And what you can expect is we've invested a lot of time and effort in the last couple of months and in the last year to really make it a very rock-solid experience with enhancements in the current services, a lot of performance optimizations, we've invested a lot in parity testing.So, as I mentioned before, parity is really important for us to make sure that we have a high coverage of the different services and that they behave the same way as AWS. And we're also putting out an enhanced version and a completely polished version of our Cloud Pods experience. So, Cloud Pods is a state management mechanism in LocalStack. So, by default, the state in LocalStack is ephemeral, so when you restart the instance, you basically have a fresh state. But with Cloud Pods, we enable our users to take persistent snapshots of the state, save them to disk or to a server and easily share them with team members.And we have a very polished experience with Community Cloud Pods that makes it very easy to share the state among team members and with the community. So, those are just some of the highlights of things that we're going to be putting out in the tool. And we're super excited to have it done by, you know, end of March. So, stay tuned for the v2 release.Corey: I am looking forward to seeing how the experience shifts and evolves. I really want to thank you for taking time out of your day to wind up basically humoring me and effectively re-covering ground that you and I covered about a year and a half ago now. If people want to learn more, where should they go?Waldemar: Yeah. So definitely, our Slack channel is a great way to get in touch with the community, also with the LocalStack team, if you have any technical questions. So, you can find it on our website, I think it's slack.localstack.cloud.We also host a Discourse forum. It's discuss.localstack.cloud, where you can just, you know, make feature requests and participate in the general conversation.And we do host monthly community meetups. Those are also available on our website. If you sign up, for example, for the newsletter, you will be notified where we have, you know, these webinars. They take about an hour or so, and we often have guest speakers from different companies, people who are using, you know, cloud development, local cloud development, and just sharing the experiences of how the space is evolving. And we're always super happy to accept contributions from the community in these meetups as well. And last but not least, our GitHub repository is a great way to file any issues you may have, feature requests, and just getting involved with the project itself.Corey: And we will, of course, put links to that in the [show notes 00:31:09].
Thank you so much for taking the time to speak with me today. I appreciate it.Waldemar: Thank you so much, Corey. It's been a pleasure. Thanks for having me.Corey: Waldemar Hummer, CTO and co-founder at LocalStack. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, presumably because your compensation structure requires people to spend ever-increasing amounts of money on AWS services.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
We're back for 2023 with Kito, Danno, and special guest Andres Almiray, Senior Principal Product Manager, Database group, to talk about the latest versions of Andres' JReleaser tool, building CLIs in Java (picocli, JCommander, Spring Boot, Quarkus, Micronaut), JBang, Jarviz, AI, whether or not Java is over the hill, http4s, and much more. We thank DataDog for sponsoring this podcast! https://www.pubhouse.net/datadog Overview Server Side Java – Accelerate Your Lambda Functions with Lambda SnapStart (https://aws.amazon.com/blogs/aws/new-accelerate-your-lambda-functions-with-lambda-snapstart/) - Quarkus support for AWS Lambda SnapStart (https://quarkus.io/blog/quarkus-support-for-aws-lambda-snapstart/) IDEs and Tools - JBang (https://www.jbang.dev/) - Writing CLIs in Java () - picocli (https://github.com/remkop/picocli) - JCommander (https://jcommander.org/) - Frameworks that you can use to create CLIs - Spring Boot Console Apps (https://www.appsdeveloperblog.com/spring-boot-console-application/) - Quarkus Command Mode Apps (https://quarkus.io/guides/command-mode-reference) - Micronaut Command Line Applications (https://docs.micronaut.io/1.0.0.M4/guide/index.html#picocli) - Command Line Interface Guidelines (https://clig.dev/) AI - Microsoft, GitHub, and OpenAI ask court to throw out AI copyright lawsuit (https://www.theverge.com/2023/1/28/23575919/microsoft-openai-github-dismiss-copilot-ai-copyright-lawsuit) JReleaser - v1.4.0 released on Dec 29 2022 - Improved Maven deployment support - New FLAT_BINARY distribution - Threaded messages in Mastodon - Buildx support in Docker packager - New java-archiver - v1.5.0 (upcoming) - Environment variables and System properties support - New LinkedIn announcer - New winget packager for NATIVE_PACKAGE distribution - Updates and deprecations to CLI flags Jarviz (https://github.com/kordamp/jarviz) - Jarviz is a JAR file analyzer tool. You can obtain metadata from a JAR such as its manifest, manifest entries, bytecode versions, declarative services, and more. Other - GitHub changes checksum algorithm for source archives (https://github.blog/changelog/2023-01-30-git-archive-checksums-may-change/) - http4s (https://http4s.org/) - Versioning schemes: - ChronVer - CalVer - SemVer - Java-Module / Java-Version - Tmux (https://github.com/tmux/tmux/wiki) - Charm.sh (https://charm.sh/) - GitHub - shyiko/jabba: (cross-platform) Java Version Manager (https://github.com/shyiko/jabba) - Snapcraft (https://snapcraft.io/) - OpenFeign (https://github.com/OpenFeign/feign) Picks - NixOS (https://nixos.org/) - Neovim (https://neovim.io/) - Toot (https://toot.readthedocs.io/en/latest/usage.html) Other Pubhouse Network podcasts - Breaking into Open Source (https://www.pubhouse.net/breaking-into-open-source) - OffHeap (https://www.javaoffheap.com/) - Java Pubhouse (https://www.javapubhouse.com/) Events - DevNexus 2023 - April 4-6, Atlanta, GA, USA (https://devnexus.com/call-for-papers) - JCON EUROPE 2023 - June 20-23, Cologne (Köln), Germany (https://jcon.one/) - Gateway Software Symposium Mar 31 - Apr 1, 2023 (https://nofluffjuststuff.com/stlouis) - Pacific Northwest Software Symposium April 14 - 15, 2023 (https://nofluffjuststuff.com/seattle) - JPrime - May 30-31st, Sofia, Bulgaria (https://jprime.io/) - Central Iowa Software Symposium June 9 - 10, 2023 (https://nofluffjuststuff.com/desmoines) - Lone Star Software Symposium: Austin July 14 - 15, 2023 (https://nofluffjuststuff.com/austin) - ÜberConf July 18 - 21, 2023 (https://uberconf.com/)
Ian Sutherland, Node.js core contributor and Architect and Developer Experience Lead at Neo Financial, joins the pod to talk about zero-dependency CLIs, why they're fun to build, and what they can teach us about developing other applications. Links https://twitter.com/iansu https://github.com/iansu https://iansutherland.ca Tell us what you think of PodRocket We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com) Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Ian Sutherland.
In the beginning was the command line. Actually, before that were punch cards and paper tape. But as Multics and RSTS and DTSS came out, programmers and users needed a way to interface with the computer through the teletypes and other terminals that appeared in the early age of interactive computing. Those interfaces were often just a program that sat on the filesystem, eventually running as a daemon, listening for input from keyboards. This was one of the first things the team that built Unix needed, once they had a kernel that could compile. And from the very beginning it was independent of the operating system. Due to the shell's independence from the underlying operating system, numerous shells have been developed during Unix's history, although only a few have attained widespread use. A shell, also referred to as a command-line interpreter (or CLI), processes commands a user sends from a teletype, and later a terminal. This provided a simpler interface for common tasks, rather than interfacing with the underlying C programming language. Over the years, a number of shells have come and gone. Some of the most basic and original commands came from Multics, but the shell as we know it today arrived with the first versions of Unix: Ken Thompson introduced the Thompson shell in 1971, the ancestor of the shell we still find in /bin/sh. The shell ran in the background and allowed for a concise syntax for redirecting the output of commands to one another. For example, pass the output to a file with > or read input from a file with
Noah Prince (Co-Founder/ CEO, Strata Protocol) and Austin Adams (Lead Protocol Dev, Metaplex Studios) sit down with Austin Federa to discuss the integration of Strata's Dynamic Pricing Mint tool into the Metaplex Program Library.00:51 - What is Strata?02:12 - Challenges when launching a token04:43 - Why is Strata more successful than competitors?06:15 - Fundraise and the changing use cases of tokens on Solana08:47 - Changing mentalites around the function of tokens10:48 - How is Metaplex's approach different11:51 - Description of the flow using Strata13:25 - Mechanisms of dynamic pricing15:12 - Tools for dynamic pricing / Collusion19:06 - Metaplex and additional tooling21:54 - Optimizing Metaplex's architecture for the community25:05 - Advantages and drawbacks with metaplex's architecture29:44 - Metaplex and backward compatibility32:39 - Pitch for using dynamic pricing DISCLAIMERThe information on this podcast is provided for educational, informational, and entertainment purposes only, without any express or implied warranty of any kind, including warranties of accuracy, completeness, or fitness for any particular purpose.The information contained in or provided from or through this podcast is not intended to be and does not constitute financial advice, investment advice, trading advice, or any other advice.The information on this podcast is general in nature and is not specific to you, the user or anyone else. You should not make any decision, financial, investment, trading or otherwise, based on any of the information presented on this podcast without undertaking independent due diligence and consultation with a professional broker or financial advisor. Austin Federa (00:10):Welcome to The Solana Podcast. I'm Austin Federa. Today we're talking about a new partnership between Metaplex, the NFT implementation on Solana, and Strata Protocol, a toolkit that helps developers launch tokens. They've built some new tools to help creators set dynamic pricing for NFT mints and these change the economic incentives around NFTs which will hopefully reduce the botting of NFT mints. We're joined by Noah Prince, the co-founder, and CEO of Strata Protocol, and Austin Adams, a software engineer and lead protocol developer at Metaplex. Gentlemen, welcome to the Solana Podcast.Noah Prince (00:42):Thanks, Austin.Austin Adams (00:42):Thanks for having us.Noah Prince (00:43):Glad to be here.Austin Federa (00:44):Great. So let's go ahead and start out today with just an overview of, Noah, what is Strata and what are you guys trying to do in the space?Noah Prince (00:52):So Strata Protocol at its core is a protocol for launching tokens and managing the liquidity around those tokens. So we have a variety of different auction mechanisms, and we can launch tokens anywhere from small tokens that you don't really know who the counterpart of the trade is, there's not going to be much volume, all the way up to large tokens where you want to do a large offering and then eventually put those on a DEX. How we ended up getting into this space is just that our auction mechanisms for tokens also offer a solution for the NFT botting situation. So we thought long and hard about how to keep bots from botting the token launches that we have. And if you launch one of those tokens and then put it as the entry price to a Candy Machine, you get a dynamic pricing Candy Machine.Austin Federa (01:39):So let's talk a little bit just to kind of roll back to what Strata really is trying to accomplish here. 
You mentioned it's a solution for launching tokens and providing initial liquidity for those tokens. What are the challenges that people run into when actually launching a token? I think if you look across the space, you'd see that there are hundreds of different tokens run by hundreds of different projects across the Solana ecosystem, the majority of which were not launched with something like a launchpad or basically a protocol to help them go through that process. What are the challenges that people are facing when they're actually looking at launching a token?Noah Prince (02:12):Yeah. So I think token launching kind of comes in a few steps, right? The very first step is the ideation phase, where you're trying to figure out what your token is, do you have multiple tokens? What are the tokenomics? And somewhat in that same phase is where legal comes in. And to a lot of degrees, that is the hardest spot is where you're going to figure out what your token does. But a lot of times for people launching a token, there's this kind of big okay, we know what we want to do, but how do we physically create that token? And then how do we go and do things like auction that token off? I want to sell some of that token to investors. I want to sell some of that token to all of my community, how do we actually distribute that thing?Noah Prince (02:54):And then after that, there's the step where you've distributed it, you've collected some money for the token that you can use to bootstrap the project. And then you also want it to be tradable on a DEX or on an AMM. And then you go and set that up. So Strata is really there to help with the creation part of the token and for really small tokens, we also manage the liquidity. So if you don't want to even care about what is an AMM, what is a DEX, who is the counterparty to a trade? We have a way you can launch a token and it's basically one click. The protocol just takes care of all of it for you.Austin Federa (03:30):Yeah. So if you think about maybe a year ago, when someone was trying to launch a token, there was lots of technical components in actually creating and launching that token, but you'd have to go and submit something to the Solana token registry. You'd have to then go ahead and set up a permissionless pool on something like Serum. You'd have to go ahead and try and get it verified to get it actually listed there so it would show up in the list. Someone didn't have to add it as a custom market and all these things are functionally automated through you guys at this point, correct?Noah Prince (03:56):Yep. So for the most part, those things are automated. You still need to go and set it up on a AMM after you bootstrap the liquidity, but yeah, we're basically making it permissionless to go and do this. So the idea was to make it as easy to launch a token on Solana as it is right now to launch an NFT which Metaplex has kind of done a great job of.Austin Federa (04:16):And so there's been a lot of organizations that have tried to create launchpads or create basically systems of easier onboarding on the Solana blockchain. And a number of them haven't really gone anywhere, or they've been sort of overrun with, I would say very low-quality projects that are just trying to find a quick way to launch a token. 
What's the reason you think that Strata has had a bit more success here and not fallen into some of those traps?Noah Prince (04:43):Well, I think the first big trap there is talking about projects that are obviously disingenuous, they're trying to cash grab. They're not actually a real project. And when you talk about creating something permissionless, you want to get away from that, right? The barrier to entry should be low so that anybody can do it because tech is tech, but we don't want to be the ones that are creating the list of all the different tokens that we think that people should buy, right?Noah Prince (05:12):Equally, Metaplex isn't doing that. Metaplex doesn't go out and tell you which projects to buy. There are plenty of Launchpads that have their own lists that'll tell you who they think you should buy, and there are plenty of Twitter influencers who will tell you that as well. So that's one way that we're doing it. And then the other way that we're doing it is trying to make it easier for these projects that are smaller and maybe don't have all the idea of how to do everything that's complicated with launching a token, they just want a simple token. Things like social tokens are like little community chat projects, making it easier for those.Austin Federa (05:47):Some of the interesting things you guys have done in addition to the ability to create a new token or sell an existing token bootstrap liquidity is this idea of a fundraise and the dynamic pricing of NFT mints. On the fundraising side, what did you see in the changing ways people wanted to use tokens or the changing use cases of tokens on Solana that really led to the idea of a fundraise being something that a launchpad protocol should build tooling for.Noah Prince (06:14):Yeah. Fundraise was inspired deeply by ConstitutionDAO, which if you didn't see it, was this thing where a bunch of people on Eth just banned together, they said they were going to buy a copy of the Constitution of the United States of America. There was an open bidding that happened. I think they raised like $30 or $40 million and ended up getting outbid, but it was still this example of a community coming together and bootstrapping a ton of liquidity to do something cool. And the idea was that there will be shared ownership of the Constitution, or at least this copy of it after the bidding was done.Noah Prince (06:49):And so how you do that mechanically is just, you're collecting money into a pool that somebody then uses for the bid. And then as you're collecting money, people are getting a token that represents their share of that pool. And so even after you've used the money, they still have the token. And so with the ConstitutionDAO you had the people token. And that's just one of many ways to launch a token and why a launchpad is formatted like a wizard because we want it to be like a no-code tool where it asks you the questions that you need to answer to get towards launching your own token.Austin Federa (07:20):It's super interesting to think about the implications for some of the stuff for the intersection of real-world assets like something like a constitution and then the intersection of the ability to have full liquidity through something like a token mechanism for this. So I think that's a very interesting use case for it. And one of the great things I think about ConstitutionDAO when we saw that all happen is it's still going, right? There's still a community there. 
It's still passionate about this thing that they failed to create, but now is turned into something else which is in a large part, a lot of the story of NFTs on Solana as well, is that they start with one mission and suddenly something changes and Trash Pandas are now fighting plastic in the oceans, and all these other projects are building real community service kind of components into them.Austin Federa (08:05):When you're looking at the idea of a no-code solution here, what was the reasoning for something like that for more complex protocols? I guess the thing that I'm trying to tease out here is there's an assumption from a lot of folks that if someone's not sophisticated enough to figure out how to launch a token, they're probably not sophisticated enough to launch a project on a blockchain. That's obviously not necessarily the case, but that is part of why you've seen launchpads in general, or less code solutions be something that tends to have a lower quality project coming out of it in general. How do you respond to some of that criticism or look at the different ways that we just need to change our mentality around what a token's meant to be used for?Noah Prince (08:47):Yeah, I think there is that tendency, but as a dev, it's all about tools, right? For me, it's how can I get something done with the least effort possible that meets all the requirements. And so when you give tools to devs so that they can launch a token really easily, the devs can spend time focusing on the things that matter and not the things that don't. So part of it is that but also you need this primitive and you need this primitive to be easy because we're in the infancy of tokens right now, right? There's just one token. We're starting to see more complex systems like STEPN pop up where you have systems of tokens where that's just two tokens, all the way up to things like WOMBO, and BitClout, and Rally, where you have hundreds and hundreds of social tokens. And these things all start to interconnect together, and you can start to do really cool things when you can create systems of tokens. And that's something that you couldn't do in the past without this kind of primitive.Austin Federa (09:43):Yeah, it's fascinating. So, Austin, let's talk a little bit about the interface here with Metaplex NFT initial mints. So one of the things that we've observed over the last few months is that the increasing demand for NFT is on Solana. And also I would say real success of projects in building a strong community pre-launch has created situations where there is both a high incentive to bot the launch of an NFT, but also there's just extremely high demand for these things when they're coming up for initial mint. Some of that's driven by expectations that they might be able to flip them, but a lot of this is just organic community demand for a project that they feel very excited and interested in.Austin Federa (10:22):There's been a few attempts to create systems that would either increase the fairness or would try and reduce the incentives for botting. One of these was the Fair Launch Protocol which was created as sort of an extension of the Candy Machine toolkit, but Fair Launch Protocol never really caught on from a community standpoint. 
So what is sort of different in the approach here that you think is going to be successful in creating better incentives and dynamics?Austin Adams (10:48):I think the reason that this will be more successful is we will market it a lot harder than we marketed Fair Launch, but also the mechanics of Fair Launch weren't really, and they could have been changed they weren't really a great experience having to wait and then not knowing if you were going to get things. The NFT minter, once that sort of casino-style experience, they pull the lever, they get the thing they know right away, they're having fun, it's addicting. With the dynamic price mint coming in we get that addicting and fun feeling while still getting some technical protection against bots and making it a little bit more advantageous for creators. If they've created demand, they're getting rewarded for that demand.Austin Federa (11:38):Yeah. That's super interesting. So let's walk through, I guess, from both of you, what is the flow that both a creator and a user goes through if the project that they're trying to mint is using this new dynamic pricing powered with Strata?Noah Prince (11:52):Yeah. So the flow right now is a little bit broken up and that's kind of the point of this partnership, but right now you launch a normal Candy Machine through Metaplex, you grab the ID of that Candy Machine, and then Strata has a UI where you can plug in that ID and it converts it to a dynamic pricing Candy Machine. Now from a user standpoint, this looks pretty much exactly like the usual mint interface that you're looking, they're used to, right?Noah Prince (12:18):You just click a mint button, but the price is changing. So the price is just slowly ticking down and occasionally it bumps up when somebody purchases something and you can also switch tabs and you can go look at a price history plot. But as a user, you're trying to figure out at what point do you want to enter, right? At what price do you want to pay? And bots are playing the same game which is an unsolvable game. When do you enter a live market is a question that nobody knows the answer to. So it feels very much like a normal mint it's just that the price is moving and it's a game of who flinches first.Austin Adams (12:53):That's the current experience but as I'm sure you'll get to, we hope to create a deeper integration together that can utilize Strata's tech and Metaplex's tech for the entire experience without needing to go from one place to the other but using our new UIs and CLI tools, they can create a dynamic price Candy Machine that also gives us even more bot protections than we had before without having to go from one website to another.Austin Federa (13:25):So what is the dynamic pricing set based on? What are the mechanics that go into actually setting what that amount should be and how much volatility do you expect to see throughout the course of a typical 10,000 mint that might sell out in the course of several minutes?Noah Prince (13:43):Yeah. So generally, you want to establish what is basically the order of magnitude of the price. So something that's going to be in the 0.01 SOL range versus something that's going to be in the 10 SOL range, they're pretty different and it would be hard for any system to account for that. So generally what you're doing is you're setting kind of a range that you expect. 
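(As a rough mental model of the mechanism Noah describes—not Strata's actual on-chain program, and with made-up parameters—the mint price decays toward a floor over time and is bumped back up by each purchase, so it ends up oscillating around whatever buyers are actually willing to pay.)

    # Toy simulation of a descending-price ("dynamic pricing") mint:
    # the price drifts down toward a floor and bumps up on each purchase.
    # Purely illustrative -- not Strata's on-chain math or real parameters.
    import random

    start_price, floor_price = 3.33, 1.10   # SOL, as in a hypothetical mint
    decay_per_sec = 0.01                    # how fast the price drifts down
    bump_per_sale = 0.05                    # how much each purchase pushes it up
    fair_value = 2.30                       # what buyers are actually willing to pay

    price, sales = start_price, 0
    for second in range(600):               # simulate a ten-minute mint window
        price = max(floor_price, price - decay_per_sec)
        # Buyers (and bots) enter once the price dips to what they think is fair.
        if price <= fair_value and random.random() < 0.4:
            sales += 1
            price = min(start_price * 2, price + bump_per_sale)

    print(f"sold {sales} NFTs, final price {price:.2f} SOL")

Run it a few times and the price settles into a band around fair_value, which is the oscillation around a fair price described here, and the timing game is the same one bots have to play.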
So in the case of Divine Dogs, they were one of the very first ones that we did this with, they were minting an NFT that they thought would probably sell for two SOL. Now they're associated with the gods. And so 3.33 is a magic number for them. And so they actually set the starting price at 3.33 SOL and the minimum price at 1.1 SOL.Noah Prince (14:24):And so the idea was the minimum that they were willing to take as a project to get the funding to do what they needed to do was 1.1 SOL and they thought that people would pay up to 3.3 but probably not much more. And so what happened with that was I think the average price ended up being 2.32. But generally, you want the prices start slightly higher than what you think people will enter at so that bots don't have an advantage to spamming, they're just waiting for it to fall down and then it'll hit some fair price and it just oscillates around the fair price.Austin Federa (14:57):You mentioned a few things there where it sounds like projects have to do a bit of estimation around what they expect to see. What are the either software or just like human tools that someone should be looking at when they're trying to figure out where do they start with dynamic pricing?Noah Prince (15:13):Yeah. I mean, I think to a lot of degrees this is similar to right now people are just deciding a fixed price for their mint which is even more dangerous. You have no idea if it's going to sell out for that fixed-price or not. If you're a really hyped project, it probably will as long as you set it less than 10 SOL. But there's also a stigma, right? SolBears came out and set it to 10 SOL and people got pretty mad about it. So I think for most projects, this range of I mean, it depends what SOL's current prices, but right? This range of 1 to 5 SOL is generally reasonable. If you get really far off on the price, it can go above the starting price but we haven't seen that happen in practice. Usually, projects have a pretty good idea of what they're going to sell for or at least like a ballpark. They don't know exactly but they know a range.Austin Federa (16:05):Yeah, just because this was one of the first prominent uses of the Fair Launch Protocol where the community of degenerate Trash Panda Minters banded together and actually crashed the price. They all basically colluded against the project owners to mint at 0.1 SOL when the pre-mint tokens had been trading at 3 or 4 SOL on the exchanges and obviously, the price has gone up from there, but it's a very interesting dynamic when you give the community the tools to set their own pricing, you do open yourself up to a certain amount of collusion which I think is fascinating. No one would've thought that in a free market open system you'd be able to get a bunch of degens who are trying to optimize for the most value they can create to all band together and try and basically drive down the mint price of an NFT.Noah Prince (16:52):They also got to change their vote in the second half which made it a little less risky to bid small.Austin Federa (16:57):Yes. That's true. So that sort of one-tiered system is part of the dynamics here that you think make it more robust to get something like that.Noah Prince (17:06):Oh yeah. I mean, so we've done a couple of mints with it now and every single time in the Discords I actually hope that someone proves me wrong because it would be kind of interesting from a psychology perspective. But usually, there's a band of people in the Discord that are like, "Nobody buy. Nobody buy. 
We're going to let the price fall really low, like the bid small." But because it's so real-time, what ends up happening is it hits a number that's really, really good and it's just like a prisoner's dilemma. A few people defect and then everybody sees that a few people defect and all of a sudden the faction that was trying to hold back and not buy everyone starts buying and the price starts ripping because it's the lowest price that they're going to see.Austin Federa (17:47):Yep. Totally. It's really interesting the way those dynamics play out.Noah Prince (17:51):Yeah. Honestly, if your project gets hit by this and the people actually manage to do a prisoner's dilemma experiment where nobody defects, you have an amazing community, I don't even know that you need the money. Your community is incredible.Austin Federa (18:04):Yeah. It's worth noting that for the more successful projects out there, they have made many multiples of the initial mint revenue on secondary sale royalties. So it's kind of this interesting dynamic where you really want to bring the strongest community possible into an NFT project but the same time you need to fund appropriately for whatever your medium-term goals are to make sure you can actually deliver on any roadmap you've sort of laid out as a project which is really interesting. So when we're looking at some of the underlying architecture here and how it interfaces with Metaplex, I know there's a whole bunch of work on Metaplex that's been rebuilding a lot of the way that some of these contracts work. There's a whole expansion of what's possible on Metaplex coming soon. Austin, how are you thinking about additional tooling like Strata and other types of partnerships that will make it easier for a lot of this work that's being done to actually be deployed and usable? So the difference between reference implementation engineering and actually production engineering.Austin Adams (19:06):I think on a case-by-case basis, we always look at where we can stay generic and composable meaning one contract calls into the Metaplex contracts and the Metaplex contracts stay as this secure core that we audit very frequently and we're taking care of all that nonsense for the community. But in other cases, we identify a piece of technology that's really good and the composable way of doing it doesn't give us the guarantees necessarily that we want. And so we look at a deeper level of integration. The recent gains in shipping velocity that Metaplex is getting are coming more from CICD and looking at ways to improve our software stability so we are not scared to ship.Austin Adams (20:00):And I think that's what Metaplex is moving into as we're stabilizing and as we're trying to remain the base infrastructure for NFTs as well as move into some exciting new landscapes. So with this specifically, we do have some big changes coming to canning machines soon. We have some big changes coming to optional changes for everyone coming soon, but this one here falls right in line with our anti-botting work. And so we're heavily invested in making this as deep of a integration as it needs to be and shipping it as soon as possible, as well as shipping it not just in the contract level, but shipping it in our UIs and CLIs that are coming out or are out.Austin Federa (20:44):Yeah. Interesting. 
So I'd actually love to dig in a little bit more on how you're thinking about multiple layers of contracts or interoperable contracts that all can, I guess, give optionality in terms of how someone wants to deploy something. What are those different components and how are you thinking about... So classically, every time you have a contract talk to another contract, you've created an attack vector. This is most of the hacks that you see across DeFi and on Solana and on other places in Solana are non-validated fields. There's some ability for someone to inject something into the contract at a point that someone thought wasn't injectable that ends up creating an outcome that's not desirable for the users of that protocol or that contract.Austin Federa (21:28):That's like a very standard attack vector. So not to go too far into the security of this because of course that's maybe a separate conversation, but when you're thinking about that sort of multiple contract architecture, talking back to one central contract, what are the types of things you are thinking about or the Metaplex protocol is really thinking about from an architecture standpoint to make that secure, stable, but also upgradeable and able to respond to the needs of the community quickly?Austin Adams (21:53):Yeah, that's a great question. So I do believe it does depend on what the contract does in a large part, but generically, when we think about Web 2.0 land, when we've all created public APIs that take in user input, we can think about those as if they're analogous to we're allowing someone to direct their digital plumbing pipe at our digital plumbing pipe to use the euphemism or the saying that we're just all digital plumbers. I think I like that. One of the ways that we approach this is just being extremely careful on validating the input and being very restrictive with what specific instructions and what specific things a transaction can do when calling into our contracts.Austin Adams (22:47):So for example, with Candy Machine, although it is not as composable as other programs may be, we restrict the specific programs that can call out to Candy Machine and we restrict what they can do. We look at the instruction data using the instructions this far. For those who are non-technical that just means we can inside of the instruction or inside of the program, we can look at the instructions that are coming in and we can validate the input that's coming in. But for other programs such as AuctionHouse, we actually have purpose-built it to be composed over. And the way that we handle that is by bringing all the things that we want to make sure always stay secure into the contract.Austin Adams (23:33):So the token account creation, the mint creation, for example, the transfers, all of those are in the core AuctionHouse transaction protocol, but we've created this other system of composability called Auctioneer where people can put their additional logic such as token gating, timed auctions, even dynamic priced auctions via Strata can be done at that layer. So like I've said in summary, it does depend on the contract for Candy Machine because it's such a target for bots. We are very restrictive but we hope to find additional ways to loosen those things to allow more contracts to compose over it while still getting more bot, anti-botting guarantees.Austin Federa (24:20):It's kind of an interesting question here. 
When you think about on most layer ones or layer twos, the implementation of an NFT is something that's sort of done, I guess you called it the L1 or L2 level at the protocol level, as opposed to at the application level. Metaplex is a little bit different in its architecture, right? The tokens that are built are fundamentally still SPL-compatible tokens. And they're built more like an application level. And by application, I mean, it's not hard coded into the base Solana code. It's actually running on top of it which is a little bit different of an architecture than you see on something like Ethereum. What are the both advantages and challenges that both of you have run into because of that difference in architecture?Noah Prince (25:05):Yeah. So I did a deep dive at one point on composability on Solana versus Eth. Fundamentally, the NFTs on Eth and even the tokens that are on Eth are just following an interface. So it looks a lot like interfaced extension. I'm going to get real deep in engineering if I don't be careful here.Austin Federa (25:23):No, no, no, let's do it. This is the back half of the podcast.Noah Prince (25:26):Cool. Okay. Yeah. So it looks a lot like interfaced extension and classical object-oriented programming. So you think Java is the big example of object-oriented programming. Now Solana actually ends up looking a lot more like functional programming where you've got these contract endpoints that are effectively functions that operate over some state and then output a state. And then the next function can take that state and do something with it. Now, a lot of people will tell you when they're learning functional programming coming from object-oriented programming, it's scarier at first. It's like chewing glass. It's a little bit more complicated, but there's a lot more that you can do with it. And so like my example of composability actually is the current state of the integration with Metaplex where you talk about how there are different security vulnerabilities with checks, but a token is the absolute interface between us and Metaplex and that's the only interface.Noah Prince (26:26):The single check is whether or not you have the token that allows you to mint this Candy Machine and we just output that token. So we are a function that takes in some SOL and outputs a token. They are a function that takes in a token and outputs an NFT. And actually, they don't have to know about each other at all. It's only the user interface that knows about it. So this is how we generally deal with composability on Solana and why I like this model a little bit better, but I am a little bit of a functional programming maxi, so …Austin Federa (26:57):Austin, what about you?Austin Adams (26:59):Yes. I believe that the Metaplex model for NFTs is actually quite brilliant. And I'll talk about the pros first and maybe the cons second. I believe one of the reasons for our enormous growth is because our contracts are like APIs. You don't need to deploy your own contract. You don't need to manage that. You don't need to have everything that can be known about your implementation done ahead of time and then deploy an immutable contract. You can iterate and fail and try again and do new things on top of our programs without having to, one, manage the security of the program. Two, without having to really be an expert. And I know that you don't have to be an expert to launch an Ethereum NFT series because there's some great tools. But I think that's one of the reasons people choose Solana. 
Devs choose Solana, creators choose Solana to run their NFT projects is because the Metaplex contracts were brilliantly designed as APIs whereas they could have been designed in an interface model.Austin Adams (28:07):Now the cons of that are the Metaplex development team now needs to look at backward compatibility every single day. Any change that we make we have to micromanage that aspect all the time because we don't want to break anybody's use of the system. And through our DAO we need to ensure that what we're doing is reflective of what the community wants. So another con would be that some people see it as less decentralized, but in reality, because it's a community project, it doesn't seem so decentralized when you can build right on top of it and do whatever you need to because we try to keep the protocol light and do less things. I see that we can move into both areas. We can produce an interface-like system while getting these contracts as API feel. And I think that's some of the backbone of some things that you'll see coming out of Metaplex soon.Austin Federa (29:07):So when you think about something like backwards compatibility, what does Metaplex see as its sort of role and responsibility there, right? So famously for a number of years, Android had like seven different versions of the Android API that Google had to support because folks just would not update their apps. And Windows still has backwards compatibility with stuff that was probably about when most of us were born. What are you guys thinking about when you look at that sort of backwards compatibility and how long or what kind of functionality needs to persist for X amount of time?Austin Adams (29:44):So what we try to do is never break you unless it's security-related. If it's security-related, we fix it as soon as we can and we announce as quickly and as widely as we can. That hasn't happened very often and currently, we think that... We kind of take the semantic versioning approach where we will give you a long amount of time. Now we don't have a rigorous set amount of time yet. We're very new as a project if you think about it, but we will always provide you a new instruction and deprecate the old instruction and it works perfectly fine for a long time. And it's very rare. In fact, it's only happened once where we will remove old instructions. Part of that is looking at our contracts as APIs. And when you look at microservice patterns because that's how we think about them kind of is our contracts are microservices.Austin Adams (30:43):Look at the traffic of your instruction. If you're seeing it the traffic go down, people have moved to the other one, you're in safe territory to start announcing that, "Hey, we're going to start moving on from this specific instruction." But if you see it holding steady, that's a good signal from your community that that thing still needs to live or you need to educate and do more work. And that's how we'd like to see it. I think in the future, we'll see probably more rigorous guidelines around how long we're going to keep things out. But right now it's we'd be nothing without the people using it. So they're our top priority when we're shipping new things, we don't want to break anyone.Noah Prince (31:22):Yeah. I think at least how we've been approaching it with Strata is that I am very, very bearish on the idea that I'm never going to have to change anything. 
And so actually every one of our smart contract endpoints, every one of our arguments, every one of our pieces of state has a V0 next to it. Some of them actually have a V1 already. And then in SDK land, so like in JavaScript, we wrap these calls with things that don't include V0, and we wrap them in interfaces such that if we ever have to change anything, we just bump it to V1 at the protocol level, change the interfaces, leave the V0 endpoints around for a while and then, like Austin said, watch the traffic and then slowly deprecate them. But yeah, I mean, I think you kind of have to accept that these things are living, breathing things, and like most APIs, you just have to version them. Now, a lot of people who don't have V0 next to their things, don't worry, you can put V1 next to anything.Austin Adams (32:19):It's okay.Noah Prince (32:19):And V0 is just the lack of a tag. It's okay.Austin Federa (32:22):So all of this depends, of course, on creators and people launching NFT projects actually adopting and using the dynamic pricing tools. What's your pitch for why someone who's launching an NFT project should do it this way as opposed to doing it the way that's currently done?Noah Prince (32:37):Yeah. So one of the big things, I mean, even if you watch Frank, he is talking all the time about how he wants people who are long-term on his project. He doesn't want paper hands. He doesn't want flippers, right? So right out of the gate, you've got to acknowledge that people who are just buying the project to flip it immediately aren't really good for your project long term anyway. I mean, if you were going to overprice your fixed price mint, you just weren't going to sell out. And so this will help you sell out, which is ideally what you want, right?Noah Prince (33:09):Because you're picking the quantity of the mint so that you have a certain size of community. Now, if you had underpriced your fixed price mint, this actually means that you're going to get more funding to do what you want to do, right? And that's what matters: that you can actually execute on your roadmap. Now it's not like price discovery isn't happening, right? It is still happening. If you price your mint at 2 SOL and the NFT is actually worth 10 SOL, it just drives up to 10 SOL on the secondary. But you know who makes that money? People who are flipping it and don't care about the project. So I would rather have that money go to the team than to people who are flipping it, any day of the week.Austin Federa (33:49):I'd love to hear from the Metaplex side what the pitch is to use it that isn't just "it doesn't break the network."Austin Adams (33:55):Then I get nothing.Austin Federa (33:56):Because this is the thing: one of the things about crypto is we have to assume everyone is an evil, self-interested actor at all times, who cares primarily about what they're trying to accomplish from a financial standpoint and isn't an altruistic actor trying to make the world's best decentralized computing environment possible, or else all of the assumptions of how blockchain works start to break down. So I think that's one of those questions that, if either of you have something addressing that sort of side of things and-Austin Adams (34:25):Yeah, totally. I'll go with Metaplex's side of why to use dynamic price mints. So from the Metaplex side, we realize that Candy Machine has been botted so badly, and we want to increase fairness for the collectors, creators, and the community.
Just like Noah said earlier, we want to incentivize long-term holders, people who want to be a part of the project, because NFTs are showing us they're more about community than they are really a financial mechanism. They are a financial mechanism, but they've exposed this incredible new psychological phenomenon.Austin Adams (35:01):For collectors, we've seen click farms, and bots, and extensions, not so much hurting the network as just hurting the experience. So one way that dynamic price mint helps is by making these click farms and botters have to think twice, have to actually do some calculation, and have to do it in a fast and real-time manner. So this helps collectors be able to take part in the project even if they didn't get into the Discords or other things like that at the right time. It's also going to help us move past this whole allow list trend in the community, where you have to do all these specific things to get a spot, and then you get a spot and you get a chance to mint, but then you don't actually get to mint. And so, hopefully, this makes the work that's required just being a part of the community and having the desire and the funds to mint.Noah Prince (36:01):Well said.Austin Federa (36:01):Awesome. Well, I think that does it for today. Thank you both for joining us to talk about this new launch of Strata support for dynamic pricing on Metaplex and creating new tools for creators to be able to actually implement this. If folks want to read more about it or want to consider using this for their next drop, where should they go to find more information?Noah Prince (36:22):Yeah. So for now, if you go to app.strataprotocol.com and you have a Candy Machine ID, you can launch one directly right there. We also have, on docs.strataprotocol.com, extensive documentation on how to set up one of these dynamic pricing mints, and a YouTube video on how to do one, and even do one with a whitelist. In the future, we hope that this is directly in Metaplex's documentation and built more as a first-class citizen into the Candy Machine and Metaplex's new UIs, such that you don't need to be bouncing around from Strata to Metaplex. It's just there for you.Austin Adams (36:59):Yeah. 100%, stay tuned on the Metaplex Docs and on our blog, Twitter, radio station. Oh wait, we don't have a radio station.Noah Prince (37:08):Yet.Austin Adams (37:09):Yet.Austin Federa (37:09):Great. Well, thank you both for joining us today.Austin Adams (37:13):Thank you, Austin.Noah Prince (37:14):Thanks for having us.
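Noah's functional-composition framing in the conversation above (a bonding-curve program that turns SOL into a token, a Candy Machine that turns that token into an NFT, composed only at the UI layer) and his V0/V1 endpoint-versioning pattern can be sketched roughly as follows. The names and signatures here are hypothetical illustrations, not the actual Strata or Metaplex SDKs; the sketch only shows the shape of the pattern.

```typescript
// Hypothetical sketch only; these are not the real Strata or Metaplex APIs.

type Sol = number;
type MintTicket = { mint: string; owner: string };
type Nft = { mint: string; metadataUri: string };

// "We are a function that takes in some SOL and outputs a token."
// The V0 suffix lives at the protocol level, as Noah describes.
async function mintBondingTokenV0(payer: string, amount: Sol): Promise<MintTicket> {
  // ...bonding-curve program: price is a function of current supply...
  return { mint: "TICKET_MINT", owner: payer };
}

// "They are a function that takes in a token and outputs an NFT."
// The Candy Machine's only check is that you hold the required token.
async function mintFromCandyMachineV0(ticket: MintTicket): Promise<Nft> {
  return { mint: "NFT_MINT", metadataUri: "arweave://..." };
}

// SDK-level wrapper with no version suffix, so callers never care
// when a V1 endpoint replaces V0 underneath.
export async function mintBondingToken(payer: string, amount: Sol): Promise<MintTicket> {
  return mintBondingTokenV0(payer, amount); // bump to V1 here later
}

// The two programs never call each other; only the user interface composes them.
export async function dynamicPricedMint(payer: string, budget: Sol): Promise<Nft> {
  const ticket = await mintBondingToken(payer, budget);
  return mintFromCandyMachineV0(ticket);
}
```

Because the token is the only interface between the two programs, either side can be swapped or re-versioned without the other having to know about it.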
Will McGugan has brought a lot of color to CLIs within Python with Rich. Then Textual started rethinking full command-line applications, including layout with CSS. And now Textualize, a new startup, is bringing CLI apps to the web. Special Guest: Will McGugan.
Topics covered: - The elected officials of the Communauté de Communes du Pays Rhin Brisach met Monday evening in community council. On the agenda: a 500-euro grant for young people taking the BAFA, the planned cycle-path section linking Fessenheim to the nuclear power plant, and, on the financial side, a 13-million-euro loan being taken out. We discuss it with Gérard Hug, president of the Communauté de Communes. On the environment, there is also an order for several wooden composters intended to be resold to residents. The communauté de communes will cover most of the cost of these composters; 40% of the purchase price will remain payable by residents. Reporting by Jérémie Renger. Find the full article on the Pays Rhin Brisach community council at azur-fm.com, in the regional news section. - Journée verte (green day) in Marckolsheim this Saturday from 10 a.m. to 4 p.m. This is the third edition of the event: an educational day of awareness and discovery covering shredding, composting, planting, waste sorting, and recycling, as well as alternatives to the use of plant-protection products. Workshops run all day, led by the municipal workshop teams. The full program is available on the town's website: marckolsheim.fr - In Erstein, nothing is going right on the municipal council! Four of the five members of the opposition group have resigned, preventing the council from functioning. According to them, this is the culmination of a process that began several months ago. By resigning, the members of the "Avec vous pour Erstein" list trigger the dissolution of the municipal council, and a new election could take place in the coming weeks. - Chernobyl: thirty-six years later... Around thirty people answered the call of the Stop Fessenheim collective yesterday to mark the sad anniversary of the Chernobyl disaster. They gathered on rue des Clefs in Colmar, a commemoration that is far from trivial following the Russian army's shelling of the Zaporizhzhia nuclear power plant in Ukraine. - Speaking of the Fessenheim plant, the plant's local information and oversight commission (CLIS) is due to meet Friday in Colmar to review the removal of spent fuel and take stock of the progress of its decommissioning.
Dominik and Jochen talk about FastAPI. FastAPI is a still very young but nonetheless fairly widespread web framework for Python, designed to make better use of Python's more modern language features, such as type annotations and async support, than more traditional web frameworks like Django or Flask. Shownotes Our email for questions, suggestions & comments: hallo@python-podcast.de News from the scene PEP 665 -- A file format to list Python dependencies for reproducibility of an application | Brett Cannon CPython on WASM At long last, Black is no longer a beta product! | Stability Policy Django is now also formatted with black, as announced in DEP 8 PyTest 7.0 release HATEOAS — An Alternative Explanation The future of editing in Wagtail Prototype Fund EdgeDB 1.0 Release | asyncpg -- A fast PostgreSQL Database Client Library for Python/asyncio | uvloop is a fast, drop-in replacement of the built-in asyncio event loop. uvloop is implemented in Cython and uses libuv under the hood. Twitter: My dental hygienist: "Are you flossing regularly?" Me: "Do you backup your laptop and photos regularly?" Laravel Livewire with Christoph Rumpel | Alpine.Js | Caleb Porzio Advertising Exclusive deal + a gift
This week on the podcast, Wael Manasra and Cody Oss join hosts Carter Morgan and Mark Mirchandani to chat about new branding in Cloud SDK and gcloud CLI. Google Cloud SDK was built and designed to take over mundane development tasks, allowing engineers to focus on specialized features and solutions. The SDK documentation and tutorials are an important part of this as well. With clear instructions, developers can easily make use of Cloud SDK. Software Development Kits have evolved so much over the years that recently, Cody, Wael, and their teams have found it necessary to redefine and rethink SDKs. The popularity of cloud projects and distributed systems, for example, means changes to kit requirements. The update is meant to reevaluate the software included in SDKs and CLIs and to more accurately represent what the products offer. Giving developers the tools they need in the place they work means giving developers code language options, providing thorough instruction, and listening to feedback. These are the goals of this redesign. The Google Cloud SDK contains downloadable parts and web publications. Our guests explain the types of software and documentation in each group and highlight the importance of documentation and supporting materials like tutorials. The Cloud Console is a great place for developers to start building solutions using the convenient point-and-click tools that are available. When these actions need to be repeated, the downloadable Command Line Interface tool can do the work. Cody talks about authentication and gcloud, including its relationship to client libraries. He walks us through the steps a typical developer might take when using Google products and how they relate to the SDK and CLI. Through examples, Wael helps us further understand client libraries and how they can interact with the CLI. The Cloud SDK is a work in progress. Our guests welcome your feedback for future updates! Wael Manasra Wael manages the gcloud CLI, the client libraries for all GCP services, and the general Cloud SDK developer experience. Cody Oss Cody works on the Go Cloud Client libraries where he strives to provide a delightful and idiomatic experience to all the Gophers on Google Cloud. Cool things of the week Google Tau VMs deliver over 40% price-performance advantage to customers blog Find products faster with the new All products page blog Interview Cloud SDK site Cloud SDK Documentation docs Go site Google Cloud site Cloud Storage site Cloud Storage Documentation docs Cloud Code site Cloud Run site GKE site Cloud Functions site Cloud Client Libraries docs Cloud Shell site Cloud Shell Editor docs What's something cool you're working on? Carter is working on his comedy. Hosts Carter Morgan and Mark Mirchandani
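The episode itself doesn't walk through code, but one plausible version of the workflow the guests describe (experiment in the Cloud Console, authenticate once with the gcloud CLI, then script the repeatable parts with a Cloud Client Library) might look like this small Node.js sketch using the Cloud Storage client:

```typescript
// Assumes Application Default Credentials are already set up, e.g. by running:
//   gcloud auth application-default login
import { Storage } from "@google-cloud/storage";

async function listBuckets(): Promise<void> {
  // Project and credentials are resolved from the environment configured by gcloud.
  const storage = new Storage();
  const [buckets] = await storage.getBuckets();
  for (const bucket of buckets) {
    console.log(bucket.name);
  }
}

listBuckets().catch(console.error);
```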
The Milli Majlis will ratify the Shusha Declaration.
In today's session we have Joe Winchester with us, who originally wanted to do microbiology stuff but somehow ended up in IT. There he is very busy making the IBM Z platform more accessible and open. Join us for a journey through open source development, CLIs, APIs, and all the fun stuff Joe did and still does in his career.
Topics covered: - Change at the Fessenheim CLIS. The local information and oversight commission exists to keep Alsatians informed about what is happening at the nuclear site and to monitor it. Since September, Alsace councillor Raphaël Schellenberger has been elected its president, and he wants to shake things up. The next CLIS meeting will therefore be open to the public, on November 15 at the Fessenheim village hall. It is also possible, via the CEA website, to submit questions and topics about the future of Fessenheim so that they can be addressed at that meeting. - A bill for regional languages and cultures. Yves Hemedinger, MP for the first constituency of Haut-Rhin, wants to go further than MP Paul Molac's bill on regional languages. For the Haut-Rhin MP, language and culture are inseparable. The proposal has been signed by Alsace's Les Républicains MPs and by others sitting in the National Assembly, such as the Breton Paul Molac. But with the presidential election approaching, Yves Hemedinger is not certain it can be debated in the chamber; it could, however, be taken up in the next legislature. - Another bill, this one from Haut-Rhin MP Bruno Fuchs. In early October in Kingersheim, Dinah, a 14-year-old middle-school student, hanged herself, a suicide that ended two years of school bullying that drove the young girl to take her own life. To prevent such a tragedy from happening again, Bruno Fuchs wants to create an offense of school bullying in the penal code, along with a right to protection against bullying at school. He has approached Frédéric Bierry, president of the European Collectivity of Alsace, to make Alsace a pilot region in this area. - A possible masked return to school for Haut-Rhin pupils. At the start of the week, the coronavirus incidence rate rose back above 50 cases per 100,000 inhabitants in the department. At the beginning of the month it had reached its lowest level, before climbing again. Since October 18 (a little earlier for Bas-Rhin pupils), schoolchildren have no longer been required to wear masks in class. But that could change if the health situation worsens in the area, as happened in Lozère, for example, which was hit by a resurgence of the epidemic. The Haut-Rhin prefect will have to make a decision before the return to school, based on the situation at that time. - Running for a good cause. Following the Petit Ballon trail race organized by CCA Athlétisme de Rouffach, the club was able to present a check for 4,000 euros to the association Vivre avec Parkinson. This is already the fifth year this operation has been run with different associations: for each race entry, one euro was earmarked for redistribution. Of the 2,200 entrants, many runners also made additional donations, and the club then rounded the check up to 4,000 euros for Vivre avec Parkinson. The money will be used to fund various projects, particularly sport and running, to help people with the illness regain confidence.
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community. This episode is sponsored by Honeybadger, which combines error monitoring, uptime monitoring, and check-in monitoring into a single, easy-to-use platform, and was streamed live. Show links Laravel 8.54 is released TrustProxies middleware Add withoutTrashed on Exists rule Add attempt method on RateLimiter Conditional validation rule support added in Laravel 8.55 Laravel Forge CLIF-Bar New Alpine.js plugins: Intersect, Persist, and Trap Regex helpers for Laravel API version control in Laravel API versioning discussion on Twitter HTTP request migrations package Laravel mail export Immutable IP address library for PHP HTTP client dashboard for Laravel View presenter classes for Eloquent models Laravel Cashier for OpenPay billing services TypeScript with Laravel Introducing Iterator functions Laravel method injection: Why we don't need to create class objects? The road to PHP 8.1 Laracon Online Summer 2021
Ahmad Awais joins Amal, Amelia, and Jerod to discuss scripting, automation, and building CLIs with Node! We hear Ahmad's back story, learn the ABC's of mastering Node automation tooling, and share automation wins from all of our lives (and Twitter too).
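The episode stays conversational, but a minimal example of the kind of Node scripting and CLI building discussed might look like the following hypothetical script, built only on Node's standard library (not taken from the episode):

```typescript
#!/usr/bin/env node
// count-lines: a tiny illustrative CLI.
import { readFileSync } from "node:fs";

const [, , filePath] = process.argv;

if (!filePath) {
  console.error("usage: count-lines <file>");
  process.exit(1);
}

const lineCount = readFileSync(filePath, "utf8").split("\n").length;
console.log(`${filePath}: ${lineCount} lines`);
```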
Video: https://octo.github.com/speakerseries/swyx Blog Post: https://codingcareer.circle.so/c/dx-blog/technical-community-builder-is-the-hottest-new-job-in-tech Slide deck: https://docs.google.com/presentation/d/1WGCfellGTboDwtM_D9uMwsHtD0qCFeBv6AYNUSxlDLg/edit?usp=sharing My talk at Heroku's conference where I met Idan: https://www.youtube.com/watch?v=1_w1YWCHXFg Timestamps 00:01:17 Intro presentation on Why Dev Community 00:16:15 Discussion between Idan, Brian, and Swyx Transcript swyx: [00:00:00] Hey everyone! On weekends, we do long form audio from one of my conversations with people. [00:00:06] And a few months ago, I published an article on why technical community building is the hottest new job in tech. And it got a lot of traction. In fact, some of the other weekend drops on this podcast are related to that. But I was invited by the GitHub Office of the CTO to talk about it. [00:00:25] These are two people that I knew from prior engagements. Idan Gazit I actually met at the Heroku conference, when I spoke about Netlify CLI and Netlify Dev. And then Brian Douglas, BDougie, was the dev advocate at Netlify before any of us were dev advocates at Netlify. So he kind of pioneered and originated the role, which I stepped into. [00:00:46] And both of them are just very well attuned to dev community. So I thought we had a really good conversation about it. So the first part of this talk basically is me presenting a few slides on my thoughts on dev community, and then it was just a freeform discussion between myself and these two experts at GitHub. So enjoy. [00:01:17] Idan Gazit: [00:01:17] Hello, welcome to the Octo speaker series. My name is Idan and I'm with GitHub's Office of the CTO. We look at the future of development, developer experiences, and try to figure out how to make development faster, safer, easier, more accessible to more people and more situations. Alright, today we're trying something a little different.[00:01:43] Our guest is GitHub Star Shawn Wang, better known by his internet handle Swyx, and we'll also be joined by Brian Douglas, AKA B Douggie, who is a developer advocate and educator, and my colleague here at GitHub. So, excited for that. I first met Swyx at a conference in the before times, before corona, almost two years ago when he was giving a talk about state machines for building CLIs.[00:02:07]I knew of him in the context of his famous learning in public essay. And the talk that he gave was a fantastic demonstration of that: diving into an area where he had relatively little expertise, making sense of that territory, and jumping back out to explain it to the rest of us. After his talk, he confided[00:02:28] to me that he's actually a refugee from programming Excel for finance. And I think coming out of that background, Swyx excels at finding that place of empathy for developers in the middle of the unglamorous, hard parts of development, the parts that we don't like to show off to one another, because they don't make us look smart.[00:02:49]They don't make us look cool. His work normalizes the feeling of "I'm stupid right now," which is very much a part of every developer journey and with which I identify very, very much. I think that's what makes his thoughts on community building so relatable and so topical. Developer-facing businesses have to find a way to channel empathy into action.[00:03:13] And Swyx is figuring that out, in all of its messiness, in public for us to see and learn from.
And in fact the reason I reached out to invite them onto the show is this recent post that he wrote called Technical Community Builders, looking critically at how that's different from the way DevRel is done today.[00:03:30]And I think this is a very interesting take on the future of this business function for developer-facing businesses. Okay. So before I bring him on, I'll remind everybody that we have a code of conduct. It's really important to me that chat is a place where everyone feels welcome. So, please make sure to make that possible.[00:03:47] And without further ado, I would like to welcome Swyx and B Douggie. Hello. [00:03:52]swyx: [00:03:52] Hey, Hey, Hey [00:03:54] Idan Gazit: [00:03:54] Swyx, you're out in Singapore and it's like the middle of your night. Thank you so much for coming in and joining us for this talk. [00:04:02] swyx: [00:04:02] Oh, it's my pleasure. Yeah, I mean, I work specific hours, specific times anyway, so this is, I guess, the start of my day. [00:04:10] Idan Gazit: [00:04:10] Okay, well, good morning to you then.[00:04:12]Douggie? [00:04:14] Brian Douglas: [00:04:14] I'm doing perfectly fine, enjoying my normal time of the day, [00:04:19] Idan Gazit: [00:04:19] Ah, the morning. That includes the day star. Fantastic. Swyx, you said that you wanted to give a little bit of an upfront mini talk about this before we dive into this discussion. Why don't I bring you on.[00:04:35] There we go. Okay. So, like, enlighten us. [00:04:39] swyx: [00:04:39] I can't actually see the screen cause I just have my slides full screen. So just pause me if there's anything. I just wanted to, I guess, set some context for people who may not have read the post. You know, I think you and I, and Douggie, like, we've all talked about community for a bit, so we may have more context than others.[00:04:58] And so I just wanted to, you know, whip up a few slides just to set some context, and then we can actually talk, because I'm very inspired by what GitHub does. And I'm definitely learning a lot from what, you know, you guys do for community. Okay. So why invest in developer community, a little bit?[00:05:16] I feel like this is a bit obvious, but the reason I write, like, I would normally never write something like this because it just seems obvious. But the reason I write about it is I do a lot of conversations with startups. Sometimes for investing, sometimes just to give DevRel advice, sometimes, you know, marketing or whatever other network I can offer to startups.[00:05:38] I often do that. But in the past week or so, at least when I wrote that blog post, in one week I had three conversations that all ended in "can you help us find somebody to build developer community?" And I was like, okay, this is not just a one-off thing. This is a trend.[00:05:53] It's something a lot of startup founders are feeling, and there's no one really dedicated to it. There are people, of course, but it's not like an industry trend yet. So I decided to write a blog post about that. And that's why, I guess, we're here today: to talk about what's going on, community becoming more of a thing.[00:06:12] It always has been a thing, but it's becoming more of a thing and maybe professionalizing as well. So a bit of context about me. I think you've already introduced me quite a bit. I did change careers at age 30, but I definitely owe a lot of my career change and learning to code
to community, right?[00:06:27] I joined the freeCodeCamp community; the Coding Blocks Slack group and podcast was also a very big part of companionship through the journey of learning to code, which is a very rough one, even for me. And then of course I also did a bootcamp, which is a paid community, but one that's very, very focused on getting you hired.[00:06:46]And that got me into Two Sigma, Netlify, AWS, and now I work at Temporal. I think what I'm better known for, maybe, in the community space is my volunteer work in the React subreddit, where I helped to grow the subreddit from 40,000 developers to over 220,000 before I stepped down. I stepped down basically because I started moving my interests to another front end framework, Svelte, and I started that from zero to, now it's like eight to nine thousand.[00:07:12]And I also run a paid community for learning in public. So, I wrote a book, people like the book, and then we chat about career related stuff in our Discord and also grow the community. So those are my community credentials, I guess I should preface with that. I guess I'm also, I had to put this here because GitHub at GitHub Universe did this really cool Octocat thing here.[00:07:33] So I just redid my profile as a GitHub Octocat, which is really fun. And I am pretty honored to be invited as a GitHub Star, which I think is a way that GitHub recognizes community members as well, which we can also talk about: like, how do you recognize and promote, you know, I guess, your super fans, and what does that really do for you?[00:07:54]Okay. So, I'll just blast through a few points and then we can set it up for wherever you guys want to talk. So to me, I think the main articulation that I want to have is that community is increasingly the moat of a lot of developer companies. Developers have always self-organized communities, like IRC and BBSs.[00:08:12]But now companies, entire companies, have communities where that's the entire moat. Like GitHub is essentially git plus a social network. And it's really like anyone can offer git, you know, but it's a very hard proposition to replace a social network. And you find the same for Stack Overflow.[00:08:29] There's a question and answer site. Anyone can build that, but you cannot build the community. And same for Hacker News. So these seem like very key moats. And you would think that a lot more companies should be focused on that. But it doesn't seem so, at least in terms of hiring: when you look at job titles and stuff like that, they're more focused on the content creation and marketing, not so much community.[00:08:50]And I think that's changing right now, and that's why I write about it. So that's the real question: like, whose job is it anyway? There are community managers, but typically (we had one at Netlify) they're focused on the forums and social media, like maybe making inoffensive posts or whatever.[00:09:08]They can do it. They're capable of a lot more; these are just stereotypical tasks that are assigned to community managers. And then developer advocates have a bit of community as well. They do a lot of content and outreach to other communities. So it's not so much forming your own community as, like, how do we reach out and present and be a part of and meet developers where they are, rather than draw people to us, which there is a lot of as well.
But they maybe don't have as much of a focus on sticking around and making interrelationships. Customer success, support, documentation, solutions engineering: all these are, you know, communities of people who pay you. And marketing: mailing lists, webinars, conferences.[00:09:45] These are all, you know, isolated communities of people who don't yet pay you, but could pay you. And then I think there's also, you know, apart from the functional split, there's also an org chart split. And I do find that a lot of people who are directly responsible for community are at the lower rungs of the org chart rather than at the upper rungs.[00:10:04] So it's pretty weird that it's just splintered all over the place. It's not really organized. I don't know. It doesn't seem like an organizational priority in a lot of the companies that I've seen. So the main realization for me is that community is basically part of the product. And in fact, in a lot of companies, it is the main part of the product, but it's under-resourced compared to the product or engineering.[00:10:25]And I think something that is key is, like, maybe we should not call it just community management, even though that's the default title. So I offered a few suggestions, like community developer or community tumbler. Tumbler is a word from, I guess, the circus. I took it from an Alex Holman blog post, but essentially a tumbler is someone who gets conversations going and then pieces out.[00:10:47]So a lot of the times the community manager does a lot of the heavy lifting. But in order for a functional community to form into something that has many-to-many interactions, instead of one-to-many, you need to have someone to create events where people feel safe and inspired and motivated to share and to help each other out.[00:11:09]My preferred term right now is technical community builder, because it's very similar to technical product manager, which is an actual job title at Microsoft and Amazon and a bunch of other places. And it has an emphasis on technical. And there's a question of, like, must they be technical?[00:11:24] Of course not; of course you can have very, very good community builders and community managers who are not technical at all. But I think people who are technical have this extra dimension, which they can really empathize with developers on, and connect people, solve their problems right away.[00:11:40] Basically just, you know, be one of them. Like, when you talk to someone who fundamentally empathizes with your problems as a developer, you share more and you have deeper discussions. And then the other question is, why must the title be different? I posit that it's very similar to the once-in-a-lifetime upgrade in status, impact, authority, and career prospects for ops professionals[00:12:02] when the DevOps movement got started. Like, DevOps used to not be a thing. Now it's a very highly in-demand thing. And that's because it was a rebrand of existing skills that were around, but, you know, repackaged with new technology and a new focus, and a lot of organizations realized that they need to invest in it.[00:12:22]So I think a similar movement needs to happen, and you can't really rebrand something by calling it the same exact name. So that's why there's an opportunity to rebrand this discipline here. Okay.
I'm very influenced by this model from Commsor, which is essentially the opposite of what I showed you earlier, where community used to be at the fringe[00:12:42]and you used to have all these other things in control of community. The community-led model kind of inverts that, where community is at the core of everything, and from your insights from community and building relationships you spin out marketing, you spin out product, you spin out sales, and so on and so forth.[00:12:59]And I think it's a very interesting migration from periphery to core, which, I've been told, is actually the same thing that's happening to data science. Data science, at least in the companies that I've worked with, used to be a fringe thing, like a bunch of geeks, you know, messing around with the analytics, to now it actually is part of the reporting process that generates a lot of product and sales and marketing insights.[00:13:25]And I think community can do that with humans, and not less data; you can have a lot of data with it as well. So the question is, why invest in it? And really, I think my fundamental assumption is that traditional marketing and support isn't cutting it.[00:13:38]This is the traditional idea of a marketing and sales funnel. You have awareness, evaluation, and conversion, and we as developer relations people are definitely biased towards awareness, for better or worse. But I think it is only one part of the picture, and it's very transactional, right? You start at the top[00:13:53]and then you come out at the bottom to a salesperson, and then they're done with you: I wash my hands of you, and you're handed off to someone else. The problems here are a few-fold, right? Like marketing, especially developer marketing, has extremely long cycles.[00:14:08]In traditional digital marketing, you know, the traditional advice is that someone needs to hear about you six to seven times before they even check you out. For me, I know a lot of technologies I ignore for a year just to see if they stick around, and if they're still relevant after a year, then I check them out.[00:14:24] So try to do marketing attribution. Impossible. So, very, very difficult, and not within any kind of performance evaluation timeframe. And then also, what happens after I convert, right? What happens after I come out of the funnel? Do I feel supported there? Do I grow and succeed and all that?[00:14:39]So the solution is to change from mostly transactional finite games to relationship-based infinite games. And this is the bigger picture that I see: there's marketing and sales going on here, but it exists within a broader scope of community that kind of catches all the other stuff that isn't really handled by marketing and sales.[00:14:55]I've actually loaded up the Orbit model here, which I'm sure we're going to talk about. So instead of the funnel, which is a very linear approach, the Orbit model kind of characterizes the people around your company as people orbiting your company, and they may be in wider orbits, or they may be in closer orbits.[00:15:14] Sometimes they may drop out. Sometimes they may come back in. It's a very infinite relationship model; they're just constantly orbiting.
And you're just trying to draw them closer, with more and more gravity, towards your software or your community. The reason I think it's important for startups in particular is that it's a very big part of crossing the chasm, because there's a small set of people who actually pick technologies based on pure technical merit,[00:15:38] and there's a large set of people who pick technologies partially on merit, partially because there's a strong ecosystem. And there's a very, very big, steep gap in between. And people who can help companies cross this gap can deliver a lot of value for the companies involved. And that's a really core insight, I think.[00:15:57] Okay. There are even more reasons in my blog post. I don't have time to go into all of these, but we can talk about them in the discussion. I don't want this to be a lecture, and I have the last part on why now. I'll send people to the blog post if they want to see it, but that's my short little primer for my thoughts on community.[00:16:15]Idan Gazit: [00:16:15] Fantastic. That was a solid introduction. One thing that really strikes me about what you're calling out here is that I can't highlight another area where there's a business motion that's so central to success which is so undefined. Like, you think about most functions in a business, like marketing or engineering or product,[00:16:40] and if I took, you know, 10 random people and asked them, you know, what does this job entail? What does success look like, and how does it contribute to the success of the overall business? I'll get 10 answers that are more or less the same. And here, I think what's special, and maybe in a difficult sense, is that I don't think that if I asked 10 people, like, you know, what's the purpose of this business function,[00:17:05] what does success look like, what does the job entail, what level of talent do we need to hire in order to accomplish this well? Even, you know, things as boring as, like you say, sort of, you know, where on the totem pole, like, who's responsible for this and who do they report to? That level of definition:[00:17:25] I don't think I'm going to get 10 answers that are mostly the same. I think I'm going to get 10 wildly different answers that don't resemble one another.[00:17:33]Brian Douglas: [00:17:33] If I can add to that as well. This is something that's come up really recently for me, cause I shipped a YouTube video yesterday focused on what the future of DevRel looks like. So, thinking about community and how that sort of changed, even with us all being remote. There's no real, like, structure.[00:17:48] I think everybody can do something to move the needle, but I think with the folks who are doing really good jobs, when you look at that orbit model, as you bring more and more people closer to the nucleus, they stick around longer. And I think one thing that Swyx and I had in common, well, a couple of things we had in common: like, I was part of that React subreddit as well.[00:18:07]We also spent time at Netlify. So I saw a lot of the same stuff that Swyx saw, and I agree with everything that he said too. And the thing that I think made us successful at Netlify is that we had a community of folks who were just really excited about the product.
And we found ways to bring them closer to the inner circle, to the point where there are Netlify employees now who came from that community.[00:18:27] So when you think of, like, recruiting, or not just actually using the product, but if you're looking for your next advocate, it should come from the community that already exists. [00:18:35]swyx: [00:18:35] Yeah. One of the points that I made was that if hiring is your biggest problem, just like for 99% of other startups (or companies in general, it doesn't have to be startups),[00:18:46]then building a strong community helps you source a much higher quality of employee than, you know, just picking any random developer off the street. [00:18:53] Idan Gazit: [00:18:54] I mean, yeah, in the post you actually highlight that there's this sort of litany of benefits. And I don't remember all of them off the top of my head, but I remember, as I was reading through the post,[00:19:06] excuse me, I thought that there was a lot more there than I expected going into it. It's just like, well, what benefits am I going to see from doing this well? Well, you know, I'll do a better job at outreach. I'll do a better job at uptake of my product. But, you know, I hadn't thought of the hiring angle, even though it's, you know, plain right there in front of us.[00:19:26] You know, if you build a strong community, you have a very, like, high quality pool in which to fish for standout employees. And it's a source of not exactly free marketing, but, you know, it's like you have a chance of growing a class of evangelists, people that are going to go out and spread the word about whatever it is that you're doing.[00:19:46]I've even, [00:19:47] swyx: [00:19:47] sorry, I've even gone one step further. So I took the hiring thing to the extreme. So the startup that I work at right now, Temporal, we actually started listing jobs for our customers, so that we can help them hire, at least through us. So, like, okay, you don't work for us, but you can just come work at one of the customer companies.[00:20:07]And it's just like, we win if they win, you know what I mean? And you can just take this to an extreme level where you just start becoming a de facto recruiting agent, a really good one. But I do think, like, you know, if you do a really good job at community, a person's membership in your community actually outlives their[00:20:24]present employer. And that's a really strong community. That's like, okay, I'm first and foremost a member of your developer community; then, secondarily, I just happen to be at this company right now. But, you know, I do have my primary network within your community.[00:20:38] That's a really strong one. [00:20:40]Brian Douglas: [00:20:40] And I guess, can I actually get some clarification too, from you, Swyx? When you talk about these terms like DevOps: like, everybody knows what DevOps is now, but it was an unknown thing, you know, 10 plus years ago. But when you build a community, like, what are some sort of ways you can avoid those pitfalls?[00:20:56] Because I know every time I go to an event and I join a random Slack channel for just that event, I leave that Slack channel as soon as it's done. So, like, I'm curious what your thoughts are as far as building community from scratch.
[00:21:11] swyx: [00:21:11] Oh, wait, are you saying that this is a problem with DevOps?[00:21:14] Or are you just, so [00:21:15] sorry? [00:21:15] Brian Douglas: [00:21:15] I use DevOps because DevOps is a very clear term. There's an already established community. But if I started a B Douggie conference and wanted everybody to join the movement, like, it's going to be a challenge, because it's going to be me and maybe a couple of people in chat. So, like, how do I make sure that this is not another community that becomes stagnant or stale?[00:21:34] Like, I want to create the next DevOps. [00:21:36] swyx: [00:21:36] I gotcha. I gotcha. Yeah. So you and I, of course, were very informed by our Netlify experience. For anyone who doesn't know, Douggie actually started the whole DevRel practice at Netlify, and I basically, you know, was one fourth of his job after he left. Anyway, something that Netlify did, which was brilliant, was that they didn't create the Netlify movement.[00:21:57] They didn't create the Netlify conference. They created the JAMstack movement and the JAMstack conference. And I really like this idea that you build something that's bigger than yourself. Like, you build a movement that other people can get involved with and see themselves in, to the point where they start competing with you, and you have to be okay with that.[00:22:15]If you're so mission-driven that you're okay losing because someone did your job better than you, then you've really found something that's worth building a community around, because otherwise it's just, you're building a cult, I guess, where it's centered around you. And so I really like that.[00:22:32] I'll give you a concrete example, which is, at I think our second JAMstack conference, Netlify invited people from Microsoft, a competitor in some ways, who did not use Netlify at all, did not pitch Netlify at all, but just presented their ideas on JAMstack, and we invited them as a speaker.[00:22:49]Yeah. Ha. Yeah. I mean, I think that we should have more, you know, competitor companies also visit the conference as well. I think we should have more of that. I think it shows a fundamental level of security that you're like, okay, I'm not threatened by you. Or like, I care about this enough that, you know, this is big enough that multiple players can win in this space.[00:23:11] That's a real community, whereas, you know, a lot of other times you're just running it as a feeder service into your marketing funnel. [00:23:22] Brian Douglas: [00:23:22] Yeah. I like the thought about building a community that's bigger than yourself. And speaking from GitHub's perspective, cause I was a longtime user, recently employed at GitHub in the last three years.[00:23:32] Not really that recent, but in startup world that's kinda like forever ago. But what I'm getting at is the whole git collaboration, open source protocol. I liked that GitHub didn't try to strangle it and try to own it completely. There were other competitors doing a great job at having collaboration tools around git, just git in general.[00:23:53]And that sort of funnel of new users, community conferences, Slack rooms, Discords, it's been helpful for me in doing my job, because there's an already established community that I can just go into and not try to take leadership of, but more, like, hey, I want to learn from you as well.
[00:24:10] swyx: [00:24:10] Yeah, totally, totally.[00:24:11] I do think that at some level there's a transition from, like, okay, this is bigger than yourself, but then at some point you're big enough that you are a community on your own. And I think, you know, once you're past like 50 million developers, you can have your own community. That's totally fine.[00:24:27]Same thing for, like, Salesforce and Dreamforce, and AWS and re:Invent. Like, huge companies have their own conferences and that's totally fine, but I think when you're getting things off the ground, that's a totally different story.[00:24:38] Idan Gazit: [00:24:38] I think you touched on something interesting there. It's always hard to stay away from, like, blatant advertising when it comes to developers: like, you know, who do I work for? What is it that they make? That's obviously going to be a central part of the discussion if, you know, I'm representing company X or Y. But you highlighted that, you know, for Netlify the story was not Netlify, it was JAMstack; for GitHub it wasn't "look at GitHub and our specific web app," but the collaborative nature of open source, specifically powered by decentralized version control.[00:25:18]And like, you know, the git is important, the pull requests are important, the rest of the stuff that GitHub brings is important, but that's not the thing that's going to emotionally resonate with people on its own. Not unless you have such a, you know, so much of a better product that it's like, oh my God, people are wowed by just the existence of this thing.[00:25:38]Which is great. If you can pull that off, like, more power to you, you know? I think you touched on this sort of linear path. Okay. Like, you have a story, you tell it, and you think about this path, a journey that you want to take people along, that starts in marketing territory and ends in sales territory, hopefully. And then by contrast, you know, coming back to the Orbit model, one of the sort of assertions you made there is that the Orbit model is not strictly linear, that it has these other dimensions. It has this love dimension, basically like a measure of activity, and reach as a measure of influence.[00:26:14] But when I look at this model, it's still talking about these sort of concentric rings: you know, you start at the very outermost orbit, you know, as just an observer, and ostensibly you move your way into the middle. That still seems like a relatively, you know, linear journey to me.[00:26:30]I think it's curious that they put advocates as the closest, the innermost ring, versus contributors. Because when I think about, like, where do I spend the maximum amount of energy? It's in contributing, not in advocacy; it's really easy for me to advocate.[00:26:48] I can advocate React until the cows come home. And, you know, all I got to do is write, like, nice things about React, but contributing to React is an effortful activity. So, I'm curious, you know, about that journey, like, what do you think: is it really about getting people to contribution? Is contribution just, like, a left turn on this?[00:27:09] Does this make sense to you? I don't know.
I'm curious what you think.[00:27:11] swyx: [00:27:12] I feel like they've probably written this up. So I'm actually looking up the writer right now, cause this is probably a better question for Patrick Woods, who came up with this model. But I agree, in principle, at least in an open source context, that the number of people who contribute is far less than the number of people who advocate for the thing.[00:27:29] And maybe that should be the inner circle. I would say that it's less linear, because the whole point is that you can jump in and out of different orbits depending on your life situation or just whatever projects you're working on. That's totally fine. And it's not considered a failure. Yeah, I don't know if that …[00:27:47] Brian Douglas: [00:27:47] Yeah, I do have some thoughts, cause I know Patrick and I know Josh pretty well, and I have been able to rub shoulders with them, the founders of the Orbit model, or the Orbit company, as well. And I talked to Patrick on his podcast, which is called Developer Love; in episode one you can hear, in way more detail, what I'm going into right now. But the one thing that I had to figure out when I joined GitHub as a developer advocate (at the time we had advocates, but no one actually had the title at the time at GitHub)[00:28:12] is that I was given the reins to do developer relations at GitHub, to figure out what that meant. And at that time I had to figure out also what that meant, but also give a talk at DevRelCon, cause we had a speaking slot, and I called myself the Beyonce of GitHub. And I do that tongue in cheek and I joke around about that, but I do that because, like, I don't play[00:28:33] Beyonce music all day, every day. Like, I don't, you know, I don't know how to play the backing tracks on bass or anything like that. So I'm not really contributing in that sense, but I will tell you about Beyonce and tell you her story. And I think it's the same thing with open source. Like, I made a contribution to Node.js back in November; it was a really painful process.[00:28:50] I learned a ton, and my contribution to Node.js was that I read blog posts and did a contribution on their repo. But the difference is, when I get on stage and I show you how to write a script in Node and I go around and I share, I'm like, well, Node is still great despite Deno and all these sorts of Rust competitors, like, I'm still advocating for Node.js.[00:29:12] And I think if you can bring more people to the sort of inner circle, I think that's always going to be super helpful. And if you have people who are going to be the mouthpiece, I guess what I'm getting at is my job at GitHub is not to be the number one developer advocate in the world. My job is to build more developer advocates.[00:29:30] So if you can advocate GitHub on behalf of GitHub and I don't have to be involved, then that's an entirely automated process. Now you can argue contributions can automate that and just grow and sustain the project. But there are a lot of GitHub projects, or sorry, open-source projects, that have lots of contributions that you've never heard of.[00:29:48]So, like, until someone tells me that it exists or I see it on the trending tab, it's going to be a hard thing to focus on trying to get more contributors when no one actually knows about this project. [00:29:57]Idan Gazit: [00:29:57] Right.
There's definitely a hurdle to be crossed there, in terms of just: where do I even hear about this?[00:30:05] I'm sure, not coming from a marketing background, that there are entire textbooks about that phase: how do I get people to even know that I exist, how do I wedge the door open long enough to attempt to get across "and here's why you should care about me"? There's a whole phase of just spreading the word. [00:30:30] swyx: [00:30:30] That's why I think we do need technical community builders, whatever we end up calling this thing. They probably need to be technical, because they need that technical leadership of "I authentically went through the same journey that I'm hoping you also go through with me."[00:30:48] And this is something that non-technical community managers cannot do. So it's a weird thing where you have to hire someone with a software engineering background, pay them like a developer, but then put them on non-technical things, which is community, essentially.[00:31:09] Brian Douglas: [00:31:09] Right. It's a weird job, and there's an authenticity to it as well. I would not have known how to be a developer advocate if I wasn't a developer first. So I always put myself in the mindset of: if I had to use this thing and it takes me 12 minutes to get it set up, I'm probably never going to use it again.[00:31:24] So how can I advocate on behalf of this product to make it better, and how can I bring that information back to whoever makes decisions at the project, company, or maintainer level? I still think it's one step more than just contributing and keeping the lights on.[00:31:41] It's more like, hey, I want to also bring that feedback back: how can I improve this? And I think the roles inside the community — technical community manager is a great role because it actually touches all those different pillars, and specifically in the Orbit model, which I know we're focused on. Being able to turn it on, turn it off, and also knowing how to listen are very valuable attributes that I would love to have on my team at GitHub, for sure. And we do have those, by the way; I just want to set the record straight, [00:32:13]Idan Gazit: [00:32:13] Just to be upfront and clear. So, I think we're all dancing around the bigger question a little bit: what are the qualities, what does success look like for this role?[00:32:25] How has it changed? If we previously called the role developer relations, and now we're calling it this subtly different name around technical community building and the stewardship and shepherding of a community, what is success, and how is success different in this slightly different mental model? What's different in the day to day?[00:32:50] If previously I was doing DevRel, and that meant I was doing X, Y, and Z with my days in order to succeed at my job and contribute to the success of the business, 
what does that look like in this new mental framing of community building, as opposed to simply developer relations?[00:33:11] swyx: [00:33:11] Yeah. I can take a crack at it, and then I'm sure Dougie has other thoughts. At Amazon, I can tell you directly the KPIs that we were reporting, up the chain or to the outside world. The only thing they expect out of us is the number of views on the content that we produce.[00:33:28] Right. Very depersonalized. You're just a number to me. Did I get a thousand? Did I get 10,000? Did I get a hundred thousand? I did a better job if it was a bigger number. Great. But there's no relationship there, and there's no measurement of quality: did they just glance at the title,[00:33:43] or did they actually read the whole thing and try out the demo? There are different weights for different actions people can take, and they do try to check that, but it's all a joke, and everyone knows it's a joke. It's a proxy for what we really want, which is people trying you out and seeing if they like you, and short of standing over their shoulders, you can't really get that.[00:34:06] What I do like is that Orbit is trying to innovate on that by measuring what they call love, which is the intensity of activity — the same thing, but tracked on a per-person basis. And that opens up the possibility of having more of a CRM model, which is very much the sales idea of having a view of the customer journey from beginning to end and suggesting or automating engagements as people come along on that journey.[00:34:36] Which is just a lot less transactional. Even at Netlify, when you get to the point of attaching UTM tags to your posts to see the response to your campaigns, that's just marketing; that's not really the role anymore. So I definitely care a lot more about the relationship aspect and how much you can cultivate just by understanding the customer journey, rather than treating people as faceless numbers. [00:35:04]Brian Douglas: [00:35:04] To add to that as well,[00:35:06] I'm definitely against trying to look at views and how many people are in the stream right now, because I think you've lost it at that point. But what success looks like is the names I see in the chat right now. I see a lot of familiar names; how many of those familiar names do I see next time?[00:35:21] I didn't even know this term "tumblers" that you mentioned in your slides. I've seen it around, but I didn't know what it was; the Party Corgi chat has tumblers, and I didn't know what tumblers were until today. But I do have an anecdote as well, from Netlify.[00:35:33] At Netlify, when we were doing bottom-up growth, we had the opportunity to speak, and also to attend and have a booth, at React Rally. And it would have been super easy to say, hey, can you fill out this form, we'll get your email, and then you have a chance to win this thing from Netlify.[00:35:52] Instead, my approach at that conference, which was one of the first conferences where I ever did any sort of marketing or advertising, was: 
Come to the booth. We had a Nintendo Switch on the booth table, and we had a bunch of stickers. The deal was, if you "switched" to Netlify — it was a pun, really —[00:36:10] we'd give you a chance to win the Switch. And the steps were: all you had to do was scan a QR code, which took you to a GitHub repo, then click the Deploy to Netlify button. The site you deployed from Netlify after 15 to 30 seconds happened to be a Gatsby site, so we were on-brand for the conference. Then you read the website you just deployed, and the instructions say, click this button to tweet. That tweet put you in a hashtag, and I had a Node server that would then pick a random person.[00:36:38] So we did this for three days. We gave away the Switch by the second day, because we'd had enough people; I think the conference was like 600, 700, and we had about 320 people who participated. And after the first day we knew we had engaged the community, because the next day two or three people came up and said, hey, I clicked the button and then I saw what you deployed.[00:36:55] And it was a Gatsby site, and at the time Gatsby wasn't even 1.0, so basically nobody was using Gatsby yet. And they're like, yeah, I switched my entire blog to Gatsby, and it's hosted on Netlify. So then we know, hey, this person is actually super engaged; this is my next advocate. Whatever you need, I'll give you a sweater or a t-shirt;[00:37:12] if you don't win the Switch, I will engage you and give you everything you need to continue down this path. And that was the focus. For marketing it looked great, but we didn't have the traditional "fill out this web form." It was: click this button, use the product, and if you don't want it, by all means delete the repo. GitHub at the time had all your information anyway; we weren't even collecting your information.[00:37:33] So the goal was really just taking it for a test drive. And then, if it works out for you, we have this forum, we have this community, we have GitHub issues — jump in where [00:37:46] swyx: [00:37:46] you fit in. Yeah. And then we also, for potential enterprise team customers —[00:37:52] this was after Brian left — we also had a separate process for potential customers to highlight to the sales team, where we actually scanned their badges, took down info, and fed it indirectly into their CRM or whatever. And that was pretty good, because we were able to capture a lot of really useful detail that gave our salespeople a really good [00:38:10] Brian Douglas: [00:38:10] head start.[00:38:11] Yeah. And you just don't know who you're chatting with, either. Because in that story about being at React Rally, one of the people who walked up and said, hey, this is actually pretty cool — that person was Max Stoiber. A lot of people know him; he used to work at GitHub for a time, he built a whole product that got acquired by GitHub, and now he's at Gatsby as well, coincidentally. But I had never met Max. I just knew who he was; I knew his story. 
And then we connected. He wasn't the number one Netlify fanboy — I'm pretty sure he didn't walk away shipping everything to Netlify — but we made that connection.[00:38:42] So every time I had a conversation with Max, he remembered me. It was a nice serendipitous moment of, oh yeah, we met that one time when I did that thing. And you just can't put a metric on that, like "what big names do you know?" At the time Max wasn't even a big name, but you just can't quantify that; you can't put a number to it.[00:39:02] You just have to go with it. [00:39:04] swyx: [00:39:04] I mean, it probably contributed to the React docs being on Netlify as well. Yeah, it's a domino effect, and there's a sort of density effect: one person using it, all right, cool; two people, all right, cool; but then three prominent people, and it starts to become a thing, you know?[00:39:19] So I like that concentration of presence, which also points to it being more of a community. Right. I don't think we gave you a lot of numbers there; we just talked about people, which is a very natural thing. [00:39:34] Brian Douglas: [00:39:34] Yeah. And one thing I did want to add real quick: when I joined GitHub, my biggest goal — I had spent four years in San Francisco, and I had only met a handful of GitHub employees and never went to the GitHub office. And my goal as a GitHub employee today, as a developer advocate, is to be a face for a company that has an Octocat for a face. I want you to know who to reach out to, and whether it's me or it's not me, I'll get you to the right person. That's one of my goals at GitHub: to be an advocate for routing you to the right people. [00:40:02]Idan Gazit: [00:40:02] That's interesting. I mean, there's a part of my brain in the back going, the real win was the friends we made along the way, exactly. It's interesting, though, because this role you just described exists: it's called an ombudsman. If you're familiar with this, I think it comes out of the military. It's the person that the families at home are in touch with in order to reach their loved ones who are deployed wherever, and raise any concerns or whatever.[00:40:31] So there's a name for this role of liaison into the company: an actual human who can step in and maybe not solve your problem directly, but at least point you in the right direction and attach you to the person who can actually help you move forward.[00:40:51] But you're right in saying that you haven't really given me hard metrics. If I'm now going to pitch to a company, "hey, here's what I'm going to do for you," they're going to ask: okay, what are the OKRs? What are the KPIs?[00:41:09] What is the thing that you're going to be measured on? How do we know if what we're doing is succeeding? 
[00:41:16] swyx: [00:41:16] There is a company that actually does that, Weaver that AI; they call it community-qualified leads, and it applies a very salesy model to this: direct attribution towards sales and marketing and all that.[00:41:27] So yeah, once you have the tracking system in place, you can absolutely do that, and if you need to quantify it that way, then absolutely. I don't necessarily feel that strongly about it, because it tends to become a fight over whoever had the last touch getting the most attribution, which makes it a very political thing.[00:41:49] Idan Gazit: [00:41:49] Sometimes between departments, in some senses. That sounds like it's going to set up all the wrong incentives inside. It's like when you're at a store and you get mobbed: "no, I'm the one who helped you; did anybody help you today?" [00:41:59] swyx: [00:41:59] Yeah. So for me — I don't know if you guys have played Kerbal Space Program?[00:42:04] Idan Gazit: [00:42:05] Only heard of it, only heard of it. [00:42:08] swyx: [00:42:08] Okay, I'll just give you the rough intuition. When you start off trying to get the rocket off the ground and into orbit, you're very concerned with all the tiny little mechanics: what degree of tilt you're at, what your yaw and pitch are, your velocity and your weight, and the stages that you fire.[00:42:24] But once you're basically at velocity and in space, you then only care about your delta-v — I forget what the calculus is — you only care about your high-level metrics, and you don't actually care about the low-level stuff, because you're beyond that.[00:42:41] You're cruising at a speed where you should just move the big controls that actually matter, and leave the little minor attributions to random noise; it will bubble up if it actually becomes an issue. And I think that's how larger communities should be managed.[00:42:57] As long as your efforts are growing at a decent rate, you can trust that it will probably trickle down, and you don't have to be too precise about exactly how you attribute it. That's at least my intuition. It's going to bother me now that I don't remember — it's dV or something like that, for your delta-v. Anyway, I'm sure someone in chat is yelling at me. I have a question for you guys, if you want to entertain this. There's a problem in my mind which I haven't resolved, which is this idea of a super user. At Netlify we had them; our friends at GitHub, you call them GitHub Stars. Stripe has its community experts. At AWS, these are Community Builders. These are basically unpaid super users, to whom you give some kind of perks, but they're your external third-party advocates. What do you think about them? How do you make them effective?[00:43:49] And basically everyone is new to this game — the GitHub Stars program is like a few months old, right? Or maybe a year old. Yeah, since September. What's your take on these kinds of programs? What is their role compared to you guys? [00:44:06] Brian Douglas: [00:44:06] Yeah. 
I can speak partially on behalf of GitHub, and it's something I'd put a lot of thought into before I got here, because, ironically, I was trying to help build what is now the Netlify equivalent of this,[00:44:18] but I just didn't have time before I left to see what it is today. I had that same thought. The reason I gave that talk on being the Beyoncé of GitHub is that Beyoncé has a super-fan group: they're called the Beyhive, and if you go after Beyoncé, the Beyhive will show up.[00:44:37] It's not quite that intense with us, but it's like when people came after her after she had the baby — [00:44:42] swyx: [00:44:42] people will know. SNL had a really great skit where someone admitted that they didn't like a Beyoncé song, and then the Beyhive showed up. Yeah. And it's [00:44:52] Brian Douglas: [00:44:52] the same thing we saw with the K-pop stans, like BTS — that's a little more extreme, but there is a group that will go to bat for you.[00:45:01] And my job is to really go to bat for the hive. So to answer your question: success looks like these folks creating the courses, writing the books, being on the forefront of creating the YouTube videos when the thing is announced. It's the opportunity to give them as much information as they want.[00:45:20] So if they want to monetize it, they can; if they want to grow a community around it, they can. Simply put, they're doing a good job, and we want to make sure we're catering to them, because if someone's already doing my job for me, I'm all for it: hey, let's have a coffee,[00:45:35] let's learn what your blockers are. How can I unblock you in the future? Are there any features you're looking to ship? Let me introduce you to the PM and let the PM get your feedback directly. You just take the company directly to the source of the growth; that's how I see it.[00:45:52] And I've had similar talks with other leaders of these sorts of groups, and that's usually their goal too: help empower folks through the people who are empowering them.[00:46:02]swyx: [00:46:02] Yeah. Yeah, I like that. [00:46:05] Idan Gazit: [00:46:05] I'm strongly reminded of a post from way back in the dinosaur ages about success being a function of being able to grow a thousand fans. If you can find a way to reach that sort of threshold — and it's thrown out there, I think, in the same sense as whoever coined "mastery comes at 10,000 hours" — it's an order-of-magnitude thing.[00:46:28] When you reach this tipping point, that's a signal that what you're doing is working, and maybe this is the kind of metric we're looking for. It's not views, it's not posts. It's: how many engaged super fans are we creating?[00:46:44] How many people do we have that love the thing we're doing so much that they're going out of their way 
to spread it to more people? And looking at that as — like you said about Kerbal Space Program — the big levers of success, not the little fine-tuning adjustment dials, but the big steering wheel that indicates we're doing the right thing.[00:47:05] I don't know. This question of what the impact of GitHub Stars has been — the program has only existed, I guess, since, Doug, you said September? So it's either a hot minute old or maybe a thousand years old; it's unclear. [00:47:19] Brian Douglas: [00:47:19] Yeah.[00:47:20] Actually, I think the official launch was September; we started forming the Stars program around May, June. And I can give a very clear impact that I saw from my end, around the feature called the GitHub profile README. It's a feature everybody has access to now, but at the time we had it under wraps in a super alpha, like we do for all features at GitHub. We have the "staff ship," as we call it — alpha alpha, or whatever comes before alpha — that's how we test our features, and GitHub employees all leverage it.[00:47:42] This feature sort of came out of nowhere, and I had access to it. We were able to get it in front of the Stars pretty early on, to the point where we actually had a GitHub Star create some content on how to build your profile README, which was pretty cool, within a week of launch.[00:48:04] And that is basically the de facto tutorial on how to create a profile README, because it was so early. This individual, Monica — I guess I can name them — SEO-wise, that's the post now, not docs.github.com. And that's success to me.[00:48:26] That's seeing someone win, in the sense of content and engagement with the community, and now being the point person when it comes to that topic. [00:48:36] Idan Gazit: [00:48:36] That's actually a really great one: I know that I've succeeded at this job when other people's results outrank mine on Google. That's fantastic. We actually have a question here from Jeremy: what's the feedback loop for these super users? And there's a follow-on question: should the company be monitoring the output to manage their message? I'd argue that you can't manage other people's message; otherwise you have to pay them a salary.[00:49:07] But there is a question: if part of what you're trying to do as community builders is to build up this front line, this top tier of super fans, how do you help them succeed at that? What ammunition are you giving them, and how can you influence what they do? Say I launch a new feature.[00:49:29] What I really want is for my super fans to go out there and create content that shows off what this new feature can do, maybe use it in ways I didn't even think of, and show how it fits into a million different workflows. 
And each of those super fans also has another foot in whatever community they came from.[00:49:47] So, say React, Svelte, Vue, whatever, all these front-end frameworks: I'm going to have super fans from all of these different walks of life, and each one of them is going to take the new thing I did and show, this is how it matters to the Vue community, this is how it matters to the whatever community. And that's, I think, a very different thing.[00:50:07] So what do you both think about that?[00:50:09]swyx: [00:50:09] I like it. [00:50:13] Brian Douglas: [00:50:13] Yeah. I don't know if you had connections to the AWS Community Builders when you were at AWS? [00:50:17] swyx: [00:50:17] Yeah, yeah. I knew of them, yeah. [00:50:22] Brian Douglas: [00:50:22] Awesome. Yeah. So we mentioned the GitHub Stars, but we have other groups as well. We have some members of our support team who also run a support community. GitHub very likely has 56 million developers worldwide, which sounds like a flex — it is — but it means we just have multiple groups.[00:50:35] So another group you might not know we have is a group of open-source maintainers that we talk to on a regular basis. It's actually a structured conversation and a group, and we get feedback from some of the largest open source projects you've heard of.[00:50:51] And it's very important for us to treat them — well, I was going to say treat them with respect, but it really is respecting their time: getting their feedback directly to the people who can actually turn that feedback into changes on our platform.[00:51:06] As far as structure goes, we have a monthly meeting with all the Stars. Everybody's invited; we call these the Stars insider calls, and the PMs will show up and talk about some really early ideas for features, and the Stars get to see a feature develop over time until it's ready for beta.[00:51:23] And at that point it starts with, oh, I knew this was coming out: I'll use this, I'll incorporate it with my team at work, or I'll write some content, whatever you want to do with it. You just have those interactions. And that's what we did with the Stars conference, which, again, wasn't a huge public event;[00:51:38] it was just for the Stars, to the point where I think even you, Swyx, mentioned, oh, I didn't know this was a thing, never heard of this before. And that's because we just did it. It's only for the Stars; it's not meant to promote GitHub in any way, just to give them access to all the information [00:51:54] swyx: [00:51:54] we even had an astronaut swing by [00:51:57] Brian Douglas: [00:51:57] did have an astronaut from NASA.[00:51:59] But in addition to that, we do give you the opportunity to have some unfiltered conversations as well. One of the requirements for Stars is to sign an NDA, just so we can have some really freeform conversation about GitHub the platform — complaints, wins, everything across the board. [00:52:17] Idan Gazit: [00:52:17] Yeah. [00:52:18] swyx: [00:52:18] Candor. Yeah. I mean, I like it. It's hard to organize. 
I think it's a full-time job, actually, if you want to do a good job of it. And again, it points to this thing becoming its own role, because — I don't know who handles it, but it's probably not developer relations handling it.[00:52:35] It's just... yeah. I think this is a growing field where we're all defining what different categories of activities we can invest in, and this is one of them. Another trend I see a lot is people building universities: Apollo building Odyssey, Netlify building Jamstack Explorers, I forget who else — [00:52:54] Brian Douglas: [00:52:54] the automation academy from Angie, [00:52:56] swyx: [00:52:56] Angie, you know, she's the OG. And then GitHub has had... Labs, or I forget what you guys call it. We did [00:53:02] Brian Douglas: [00:53:02] the Learning Lab. [00:53:04] swyx: [00:53:04] Yeah. I tried to go through it for Actions, but I didn't really get very far, to be honest. But I think people are building LMSs, their own custom LMSs for their learning, and I think that's another investment in community. Anyway, sorry, I don't mean to ramble. These are all really cool trends that I think are part of the whole future-of-developers thesis. Yeah, [00:53:27] Idan Gazit: [00:53:27] fantastic. We are at time, even a little bit over time, so I think we could probably keep jamming on this for a while. I'm going to throw up a banner on screen.[00:53:39] There's a thread in OCTO discussions where, if people have questions — or maybe, Swyx, you can drop some interesting resources in that thread — folks who are watching this later from the YouTube recording, or who didn't get a chance to ask a question and only think about it later, like, oh wow, I should have asked this,[00:53:57] can drop in, ask those questions, and get some follow-up engagement. Thanks so much for joining us, Swyx, especially because it's, I don't know, tomorrow in the middle of the night in Singapore; it's unclear to me what time it is. Thank you so much for joining us, and thank you so much,[00:54:13] bdougie, for joining me here on the OCTO speaker series. This has been a blast; have a lovely day. [00:54:20] swyx: [00:54:20] Right!
The package of new legislative drafts has already been submitted to the Milli Məclis.
"We won't need to talk about inclusive schooling anymore once school actually is inclusive..." Working with students with special needs, Sandrine Boissel has always been driven by a dynamic of transformation and support aimed at making all learning accessible. For many years she has repurposed, invented, hybridized, and tested a multitude of tools and devices in order to anticipate needs and ensure the continuity of her students' paths all the way to professional integration. For Extra classe, she takes stock of her universal design for learning and tells us how, by working as a team and with a mantra all her own, you can make things move. Manipulate learn look touch, Sandrine Boissel's website. The transcript of this episode is available after the credits. Every Wednesday, discover a new episode of Extra classe on your favorite podcast platform. Follow us, listen, and share... Find us on: Extraclasse.reseau-canope.fr Apple Podcasts Spotify Deezer Google Podcasts Podcast Addict Extra classe, podcasts produced by Réseau Canopé. Program prepared and produced by: Silvère Chéret Publication director: Marie-Caroline Missir Coordination and production: Hervé Turri, Luc Taramini, Magali Devance Mixing: Simon Gattegno Copy editing: Aurélien Brault Contact us at: contact@reseau-canope.fr © Réseau Canopé, 2021 Transcript: I'm Sandrine Boissel, director of the Atelier Canopé de l'Isère since April 1, 2018 — and it wasn't an April Fool's joke! Before becoming director of the Atelier, I was above all a specialized teacher and teacher trainer. I had the chance to work with children from the very first year of preschool through high school, including young people I supported as they prepared for the baccalauréat exams. What drives me is inclusive schooling and taking difference into account, since in the end we're all different. I'm convinced that by acting on a very small scale we can perhaps change the world in tiny touches, a bit like a drop of ink falling onto a paper towel. At any point in life, anyone can become a person with special needs. It can happen to all of us, and the idea of universal design for learning is to make sure that what we offer is accessible to as many people as possible. That requires engaging with the research in cognitive science and also with how we handle emotions, well-being, time, and space. When you ask yourself all these questions, you ask who the children, teenagers, and young adults in front of you really are. You design something that will be accessible to the greatest number of these individuals, and then you sometimes need to add a few small adaptations at the margins. By diverting tools a little from their original purpose, you realize you can turn them into something useful for everyone. It all started with a tiny anecdote, which I tell because it matters. At the beginning of my career as a specialized teacher, I had the chance to support young people, from very young to much older. There was a very small boy in the first year of preschool with whom I was doing pre-Braille so that he could gradually learn to recognize his first name, like his classmates. 
That little boy said to me: "But Sandrine, what's the point of you working so hard to teach me Braille? I'm going to die anyway." So I asked him: "Indeed, you will die like everyone else, but you still have your whole life ahead of you. Why are you telling me this?" And he answered: "Well, do you ever see adults like me?" When I arrived as coordinator of the Ulis-collège [localized unit for inclusive schooling] in the Grenoble academy, I thought back to that little boy, because my students, my older teenagers, had trouble projecting themselves into adult life — from a family, social, or professional point of view — because the adults around them did not resemble them. They felt less legitimate, for example, when looking for an internship. I started from the premise that, to support them and let them project themselves into the future like their classmates, I would have them meet working adults who had the same impairments as they did. We started from a book, Témoignages de travailleurs aveugles by Philippe Chazal [Le Cherche midi, 2014], with very varied testimonies and professional roles that went far beyond the stereotypes. That was my goal. I told them: "Listen, now we're going to write to them." These young people said: "But Sandrine, that's very kind of you, but no one will ever answer us." The first 10 letters got positive replies, and we went well beyond that, because as the meetings went on, each adult would tell me: "You should contact so-and-so, because if I remember correctly they went down this professional path," and so on. In the end, from 10 adults we went to 45 working adults, in very different professions: lawyer, social worker, riding instructor, garage mechanic, and more. All sorts of professions that went far beyond the stereotypes and that gave the students several things. First, they learned to talk about themselves, since by hearing adults talk they gradually learned to talk about themselves. They also learned to project themselves into their studies and their professional lives, since we had the chance to talk with students in work-study programs as well. Above all, the students were able to draw important resources from this — arguments. When going to look for an internship, a student was able to say: "I know it's possible, because Mr. So-and-so has this profession and his workstation was adapted with this, this, this, and this." In parallel, we also had the chance to work with a researcher from the University of Orléans who was conducting a study on the employment of these young people with disabilities, and then with a workplace coach who supported them in their internship search. In particular, our work was really about helping these students reframe their weaknesses in terms of needs. Because if I talk about my disorder in terms of needs, I'm providing the solution at the same time as saying what I need. One student, who had a "dys" disorder on top of his visual impairment, was able to explain that, to be effective, he needed to be able to plug in his digital equipment and launch what was "embedded" on the computer. 
Once all that was in place, he was able, with a small procedure that takes the research on attention into account, to correct spelling mistakes very precisely. He explained that, if his needs were taken care of, he became just as effective — almost a spelling-mistake "detector" for others. So it could become an asset for the company. This question of making our world as accessible as possible to everyone — because sometimes it was the world that wasn't accessible, not the disability that was the obstacle — occupied me for a very long time. We won't need to talk about inclusive schooling anymore once school actually is inclusive. I've always had a bit of a tendency to start projects to try to change things. The very first year I was in a Clis [class for inclusive schooling], we decided to put on a very big show over the whole year. It really was THE project of the year. I had announced to my colleagues that I was going to build the float on which the pharaoh would arrive at the end. That float was a sphinx that I made with students from CP to CM2 [roughly first through fifth grade], and I announced to everyone that it would be 2 meters by 2 meters by 2 meters. A bit like with the book, people told me: "But Sandrine, that's never going to work!" In the end, that sphinx did see the light of day. The head teacher, with whom I've stayed in touch, told me when I left the school: "If one day you have doubts about something, about a project, if you think you won't manage it, think 'sphinx'." So when people tell me that inclusive schooling isn't possible, that universal design for learning is too complicated, I think "sphinx" and I manage to persuade people and bring them along.
In an unprecedented show of activity - merely two weeks after the new year's first episode (170) Mark and Greg are back, this time joined by Andres Almiray (Oracle) and Stephen Connolly (Cloudbees) to discuss all things build, modules, this week's Java 16 release, and why Java programmers should take a look at the Rust programming language. Hosts Mark Derricutt - @talios Greg Amer Guests Andres Almiray - @aalmiray Stephen Connolly - @connollys Table of Contents 00:00:15 Intro 00:00:37 Guest Introductions 00:02:05 Java 16 Released! 00:02:47 Jenkins and JDK Versions 00:04:38 var changes = LIPSERVICE; 00:05:11 Improve your Java by learning Rust 00:07:31 Hey Bruno - It's NOT YAML! 00:10:22 Project Lilliput 00:11:31 Java Turning 26 00:13:30 Java for CLIs? 00:16:47 Modules: Thoughts on The Java Platform Module System 00:18:12 Modules: Modules and Versioning 00:19:15 Modules: Semantic Versioning 00:22:19 Build: Hijacking The Maven Release Process 00:26:40 Explicit Merge Commits 00:29:16 Build: JDK Dependency (Lacking) In Maven 00:31:21 Kotlin Standard Library Versions 00:31:53 Libraries should avoid Guava 00:35:36 Jackson Version 3 Changes 00:39:10 Modules: The Lack Of Runtime Versioning In Modules 00:39:46 Modules: Agents And Module Systems 00:40:39 Run The Damn Tests Twice 00:46:00 Modules: Module Systems and Debugging 00:55:02 The Ecosystem Is More Than Code 00:55:46 Build: The Hindrance of IDEs 00:56:47 Build: Mixins In Maven 01:02:18 Build: The Perfect POM is with a BOM 01:07:17 Build: Custom Lifecycles as Mixins 01:10:09 Build: Gradle is Surprises and Deathtraps 01:11:31 Build: Maven Consumer POM and POM 4.0.0 01:14:16 Build: Project Dependency Trees Proposal 01:23:28 Build: Maven 4 and 5 Releases 01:26:49 Build: Plugin Phases and Execution Order 01:33:05 Build: Interim Hacks and Abstractions Considered Harmful 01:39:33 The Problem with Preview Features News Oracle Announces Java 16 Project Lilliput - OpenJDK proposal to reduce the Java object header by half or more, which would lower memory and CPU usage on all Java workloads. Pull Requests merging instanceof Pattern matching https://github.com/openjdk/jdk/pull/2544 https://github.com/openjdk/jdk/pull/2879 https://github.com/openjdk/jdk/pull/2913 JEP 401: Primitive Objects (Preview) and many other new JEPs landed for JDK 17. Caffeine cache goes 3.0 and with it - JDK11 baseline Links Semantic Versioning git-timestamp-maven-plugin Git Log's --first-parent Option The rise of Kotlin's stdlib and the versioning conflicts that may arise guava-beta-checker for Error Prone Jackson Release 3 Plans Build Health PomChecker 1.1.0 has been released! Problems with sorting, tidying poms Build / life cycle order Maven Bill of Materials Maven Tiles / Mixins Crafting better Gradle builds with the Kordamp Gradle Plugin suite with Andres Almiray (YouTube Video) Proposal: Project Dependency Trees schema Plugin Execution & Property Ordering Tests Module Systems Java Platform Module System / Jigsaw Layrry - Including an excellent video demonstration of Layrry in action with JavaFX. OSGi Runtime Dependencies (build is only half the picture)
I caught up with George DeCandio, Chief Technology Officer - Mainframe Software Division, Broadcom, and Peter Wassel, Director of Product Management & Strategy - Mainframe Software Division, Broadcom, to talk about the latest in Mainframe DevOps and what Broadcom are doing to help their Mainframe clients unlock the value of their Mainframe applications with modern tooling and processes, such as DevOps, and via open interfaces. We kick off with an overview from both George and Peter of the challenges and opportunities on the mainframe from a DevOps perspective, with some great insights into what is happening inside Broadcom as well as throughout their ecosystem of partners and customers. We then look at the various impacts and potential of opening up the mainframe, from the general opportunities of Mainframe DevOps to what an open platform makes possible - we discuss Broadcom's open-first message, why being low-opinion and non-prescriptive is valuable, and why that approach matters for businesses that want to succeed. Next we do a deep dive into what it means to be open (touching on APIs, CLIs, SDKs, and IDEs), and discuss Zowe and IDEs like Eclipse Che and Visual Studio Code. George and Peter give some background on what the Open Mainframe Project does, then walk us through Broadcom's commitment and contribution to it and why it matters, and we look at what off-platform tooling means, i.e. enterprise DevOps tool chains and more. We wrap up with a broad look at what Broadcom is currently doing for Mainframe DevOps, from both business and technical perspectives. George and Peter offer their unique insights from inside the business, as well as key highlights from around the world and what they are doing with customers to drive successful outcomes. Key points we cover here include their commitment to Zowe and IDEs through CA Brightside, their focus on Git and CA Endevor Bridge for Git, the significance of the Mainframe Developer cockpit, and the idea that, in effect, in the modern world everyone becomes a developer. An important point discussed is that Broadcom understand that Mainframe DevOps is a journey and requires cultural change, new tools, new practices, etc., which, if not done right, can indeed be an overwhelming change, but with support and tooling from Broadcom it becomes a journey to success - we also learn more about their no-fee offerings and why they view this journey as a partnership and not just a vendor-customer relationship. We wrap up with a brief look at the design thinking workshop Broadcom offers; be sure to visit the DevOps link below today, as your organisation will gain real value from this offering. Tune in now for all of these topics and more. This podcast was made in partnership with the Mainframe division of Broadcom. For more information please visit: http://bit.ly/BroadcomMainframe For a free MRI Security Essentials assessment today visit: http://bit.ly/broadcommritrial Learn more about the Mainframe DevOps design thinking workshop here: https://bit.ly/broadcommainframedevops . #sponsored, #broadcominfluencer, #broadcom, #mainframe, #devops, #aiops, #development, #operations, #automation, #orchestration, #security, #research, #self, #healing, #systems, #selfhealingsystems, #cybersecurity, #data, #dataprotection, #software, #management, #services, #hybrid, #cloud, #iaas, #paas, #saas, #ai, #ml, #ops .
Through the history of computing, user interfaces (UIs) have evolved from punch cards to voice interaction. In this episode we track that evolution, discussing each paradigm and the machine that popularized it. We primarily focus on personal computer UIs, covering command-line interfaces (CLIs), graphical user interfaces (GUIs), touch-screen interaction, and voice interfaces. We also imagine the future, including neural interfaces, virtual reality, and augmented reality. This episode is an introductory guide to the interfaces available and a short history, not a comprehensive tour. Show Notes Episode 16: The Personal Computer Revolution The Mother of All Demos via Wikipedia Fingerworks (developer of modern multi-touch) via Wikipedia Neuralink via Wikipedia Follow us on Twitter @KopecExplains. Theme “Place on Fire” Copyright 2019 Creo, CC BY 4.0 Find out more at http://kopec.live
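The paradigms in this episode map cleanly onto code. As a rough, purely illustrative sketch (not from the episode), here is the same tiny "greeter" exposed twice in Python: once as a command-line interface with the standard-library argparse, and once as a graphical interface with tkinter. The function names and labels are invented for the example.

import argparse
import tkinter as tk

def greet(name: str) -> str:
    # The core application logic is identical behind both interfaces.
    return f"Hello, {name}!"

def run_cli() -> None:
    # CLI paradigm: the user types arguments, the program prints text.
    parser = argparse.ArgumentParser(description="Greet someone from the terminal")
    parser.add_argument("name", help="who to greet")
    args = parser.parse_args()
    print(greet(args.name))

def run_gui() -> None:
    # GUI paradigm: the user clicks widgets, the program updates the screen.
    root = tk.Tk()
    root.title("Greeter")
    entry = tk.Entry(root)
    entry.pack()
    label = tk.Label(root, text="")
    label.pack()
    tk.Button(root, text="Greet",
              command=lambda: label.config(text=greet(entry.get()))).pack()
    root.mainloop()

if __name__ == "__main__":
    run_cli()  # swap in run_gui() to put the same logic behind a window

The point of the sketch is that the interface is a thin layer over the same function, which is roughly how each historical paradigm re-wrapped the computing underneath it.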
https://rust-lang.org
Email backup tool: https://github.com/hugopeixoto/mail-tools https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol http://www.offlineimap.org/ https://crates.io/crates/imap https://crates.io/crates/mailparse https://tools.ietf.org/html/rfc5322#section-3.3 https://hugopeixoto.net/articles/backing-up-gmail-with-rust.html
File path shortener: https://github.com/portocodes/tico https://github.com/hugopeixoto/tico
F-Zero GX image converter: https://github.com/fzerocentral/fzgx-image2emblem-rs https://github.com/fzerocentral/fzgx-image2emblem http://fzerocentral.github.io/fzgx-image2emblem/format.html https://www.fzerocentral.org/ https://en.wikipedia.org/wiki/F-Zero_GX https://en.wikipedia.org/wiki/F-Zero_GX#Arcade_counterpart https://gamedev.stackexchange.com/questions/62548/what-does-changing-gl-texture-wrap-s-t-do https://dolphin-emu.org/ https://en.wikipedia.org/wiki/NTSC https://en.wikipedia.org/wiki/PAL
Rust tooling: https://blog.rust-lang.org/2018/12/06/Rust-1.31-and-rust-2018.html#module-system-changes https://rust-analyzer.github.io/ https://code.visualstudio.com/ https://github.com/rust-lang/rust-clippy https://github.com/rust-lang/rustfmt https://clap.rs/ https://github.com/TeXitoi/structopt
Nyan - npm+yarn: https://github.com/locks/nyan https://classic.yarnpkg.com/en/docs/cli/upgrade-interactive/ https://github.com/dylang/npm-check https://classic.yarnpkg.com/en/docs/workspaces/ https://github.com/npm/rfcs/blob/latest/accepted/0026-workspaces.md https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html
Podcast editing: https://github.com/hugopeixoto/wedit https://www.audacityteam.org/ https://ardour.org/ https://en.wikipedia.org/wiki/WAV https://en.wikipedia.org/wiki/Pulse-code_modulation https://ffmpeg.org/ffplay.html
GUIs and CLIs: https://www.libsdl.org/ https://crates.io/crates/sdl2 https://github.com/maps4print/azul https://crates.io/crates/glutin https://github.com/locks/crusty https://rust-cli.github.io/book/index.html https://oclif.io/
Rust initiatives: https://areweguiyet.com/ https://areweaudioyet.com/ https://rust-lang.github.io/async-book/
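The mail-tools project linked above is written in Rust; purely as an illustration of the same IMAP-backup idea in this document's other language, here is a rough Python sketch using only the standard library's imaplib. The host, credentials, and output folder are placeholders, and a real backup tool would add error handling and incremental sync.

import imaplib
import pathlib

HOST = "imap.example.com"      # placeholder server
USER = "me@example.com"        # placeholder account
PASSWORD = "app-password"      # placeholder credential
OUT = pathlib.Path("backup")

OUT.mkdir(exist_ok=True)
client = imaplib.IMAP4_SSL(HOST)
client.login(USER, PASSWORD)
client.select("INBOX", readonly=True)   # read-only: never modify the mailbox

# Fetch every message as raw RFC 5322 bytes and write one .eml file per message.
_, data = client.search(None, "ALL")
for num in data[0].split():
    _, msg_data = client.fetch(num, "(RFC822)")
    raw = msg_data[0][1]
    (OUT / f"{num.decode()}.eml").write_bytes(raw)

client.logout()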
Rob is enjoying his new life as a freelancer, while JD struggled through another stage of grief. Both have been using their side projects to ignore the current state of the world, and have found great enjoyment in Rust and Elixir. Rob has been diving into GraphQL and password hashing algorithms, while JD has used Rust to rewrite a small CLI he created five years ago.
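Rob's password-hashing exploration happened in his GraphQL/Rust side projects; as a language-neutral illustration of what a password-hashing workflow looks like, here is a small Python sketch using the standard library's PBKDF2. The iteration count and data layout are arbitrary choices for the example, not recommendations from the show.

import hashlib
import hmac
import os

ITERATIONS = 600_000  # example value; tune for your own hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per password defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong", salt, digest)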
An airhacks.fm conversation with Alex Soto (@alexsotob) about: Director of Developer Experience, Quarkus was secret at the beginning at RedHat, replacing Micronaut with Quarkus, public Quarkus release, Micronaut comes with its own API, Quarkus is more familiar for WildFly / Java EE / Jakarta EE developers, Quarkus separates the business logic from the infrastructure, Quarkus also supports FatJARs / UeberJARs but this feature is pointless for container deployments, Quarkus and FatJARs are interesting for desktop, electron-like deployments, The Quarkus Cookbook, Quarkus disabling the HTTP cache, Caffeine cache, Quarkus and batch processing -- building CLIs with Quarkus, combining Quarkus with picocli, Quarkus integrates Kafka KStreams without the necessity of including JAX-RS, airhacks.fm episode #23 with Alexis about GlassFish, the lazy loading vs. eager loading trade-off, Quarkus optimizes Hibernate, tree shaking of JDBC drivers in Quarkus, proactively introducing DAOs: "Generic CRUD Service aka DAO - EJB 3.1/0 Code - Only If You Really Needed" then deleting them, the Quarkus developer mode mvn compile quarkus:dev, dynamically adding columns with Panache in development mode, adding extensions on-the-fly, mapping Kafka streams to WebSockets with MicroProfile reactive streams, Quarkus should support both: application.properties and mp-config.properties, The Quarkus Cookbook is going to be published in summer 2020, writing Kubernetes operators with Quarkus, the Quarkus Vault integration, airhacks.tv Quarkus / Vault questions, the Vault sidecar container, Alex Soto on twitter: @alexsotob
Today we sat down at Dominik's place to talk about our Python development environments. The broad topics included, among others: hardware, operating systems, IDEs/editors, virtual environments, linters. Shownotes Our email for questions, suggestions & comments: hallo@python-podcast.de News from the scene Python 2 end of life Setuptools dropping support for Python 2 EuroPython 2020 DjangoCon 2020 Porto Python BarCamp Köln Development environment PowerShell Bash Z shell Fish shell Terminals for Windows: cmder, best combined with ConEmu, and alternatively hyperjs iTerm2 terminal for macOS, shell integration WSL Windows Subsystem for Linux dotbot dotfile handling Chocolatey (Windows package manager) Homebrew (The Missing Package Manager for macOS) My Python Development Environment, 2020 Edition Dominik's unfinished 'work always in progress' dotfiles den for Windows virtualenvwrapper classical virtual environments virtualenvwrapper for Windows PowerShell pyenv simple Python version management miniconda conda virtual environments Poetry Python packaging and dependency management made easy pipenv - Python Dev Workflow for Humans cmd - Support for line-oriented command interpreters PEP 518 pyproject.toml etc. vim favorite editor + list of awesome vim plugins Visual Studio Code Code editing Redefined, Live Share pyforest - feel the bliss of automated imports emacs - an extensible, customizable, free/libre text editor PyCharm The Python IDE for Professional Developers flake8 Your Tool For Style Guide Enforcement Black the uncompromising Python code formatter Pylama Code audit tool for Python and JavaScript mypy Optional type checker Radon Various code metrics for Python code graphviz graph visualization software fzf fuzzy search on stdin fd find reimplementation bat cat reimplementation ripgrep grep implementation oh-my-fish package manager for fish ohmyzsh tmux terminal multiplexer mosh mobile shell Picks pprint pretty printing Typer is FastAPI's little sibling. And it's intended to be the FastAPI of CLIs. Public tag on konektom
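Most of the environment tooling in those notes is driven from the shell; as a small illustration of the underlying mechanics, here is a Python sketch that creates a virtual environment with the standard-library venv module and installs a package into it. The environment path and package name are placeholders.

import pathlib
import subprocess
import sys
import venv

env_dir = pathlib.Path(".venv")          # placeholder location
venv.create(env_dir, with_pip=True)      # equivalent to `python -m venv .venv`

# pip lives in bin/ on POSIX and Scripts\ on Windows.
bin_dir = "Scripts" if sys.platform == "win32" else "bin"
pip = env_dir / bin_dir / "pip"

subprocess.run([str(pip), "install", "flake8"], check=True)  # example package
print(f"Environment ready at {env_dir.resolve()}")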
In this episode of Adventures in DevOps Charles Max Wood interviews Priya Nagpurkar, Paul Castro and Nick Mitchell. They all work for IBM and are here to talk about their new DevOps tool Kui. They start by explaining what the IBM cloud research team is all about and what motivates them. Their goal is to make programming on the cloud as easy as possible. They share past tools that they have made toward this goal. Charles asks the guests about the future of Kubernetes and DevOps. They explain why Kubernetes is so popular and what makes it a powerful tool. Kui is built mostly on Kubernetes. They discuss the evolution of DevOps tools. They compare CLIs and browser-based consoles and explain why people gravitate towards CLIs. Kui lets developers have the best of both worlds. The guests walk Charles through different scenarios of getting started with Kui. The workflow of using Kui inside an established Kubernetes cluster is discussed. They also explain how to move over from a VPS easily with Kui. They explain how Kui betters the developer experience. They go over the features that make developers' DevOps experiences easier. They end by discussing how to get started with Kui if developers are new to Kubernetes. Panelist Charles Max Wood Guests Priya Nagpurkar Paul Castro Nick Mitchell Sponsors CacheFly Links https://www.kui.tools/ https://openwhisk.apache.org/ https://istio.io/ Knative Install and Set Up kubectl https://helm.sh/ Get Started with the CLI 10 Weird Ways to Blow Up Your Kubernetes https://github.com/starpit https://twitter.com/priyanagpurkar?lang=en https://github.com/paulcastro https://www.facebook.com/Adventures-in-DevOps-345350773046268/ Picks Nick Mitchell: Crime Pays But Botany Doesn't Paul Castro: Wifi Analyzer D'Addario NS Micro Clip-On Tuner Priya Nagpurkar: Solsa https://iter8.tools/ Charles Max Wood: https://discourse.org/ https://javascriptforum.net/ https://codefund.io/ The Man In the High Castle
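Part of the CLI-versus-console comparison in this episode comes down to the fact that CLI output is easy to program against. As a hedged illustration (this is not Kui itself), the Python sketch below shells out to kubectl's JSON output and summarizes pod status; it assumes kubectl is installed and pointed at a cluster.

import json
import subprocess

# `kubectl get pods -o json` returns a machine-readable PodList document.
result = subprocess.run(
    ["kubectl", "get", "pods", "-o", "json"],
    capture_output=True, text=True, check=True,
)

pods = json.loads(result.stdout)
for item in pods.get("items", []):
    name = item["metadata"]["name"]
    phase = item["status"].get("phase", "Unknown")
    print(f"{name:40} {phase}")

Tools like Kui aim to give you this kind of structured, visual view without writing the glue code yourself, while keeping the familiar CLI entry point.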
Sponsored by Datadog: pythonbytes.fm/datadog
Michael #1: Data driven journalism via cjworkbench
via Michael Paholski
The data journalism platform with built in training
Think spreadsheet + ETL automation
Designed around modular tools for data processing -- table in, table out -- with no code required
Features include: Modules to scrape, clean, analyze and visualize data; an integrated data journalism training program; connect to Google Drive, Twitter, and API endpoints. Every action is recorded, so all workflows are repeatable and transparent. All data is live and versioned, and you can monitor for changes. Write custom modules in Python and add them to the module library.
Brian #2: remi: A platform-independent Python GUI library for your applications.
Python REMote Interface library.
"Remi is a GUI library for Python applications which transpiles an application's interface into HTML to be rendered in a web browser. This removes platform-specific dependencies and lets you easily develop cross-platform applications in Python!"
No dependencies. pip install git+https://github.com/dddomodossola/remi.git doesn't install anything else. Yes.
Another GUI in a web page, but for quick and dirty internal tools, this will be very usable.
Basic app:

import remi.gui as gui
from remi import start, App

class MyApp(App):
    def __init__(self, *args):
        super(MyApp, self).__init__(*args)

    def main(self):
        container = gui.VBox(width=120, height=100)
        self.lbl = gui.Label('Hello world!')
        self.bt = gui.Button('Press me!')
        self.bt.onclick.do(self.on_button_pressed)
        container.append(self.lbl)
        container.append(self.bt)
        return container

    def on_button_pressed(self, widget):
        self.lbl.set_text('Button pressed!')
        self.bt.set_text('Hi!')

start(MyApp)

Michael #3: Typer
Build great CLIs. Easy to code. Based on Python type hints.
Typer is FastAPI's little sibling. And it's intended to be the FastAPI of CLIs.
Just declare once the types of parameters (arguments and options) as function parameters. You do that with standard modern Python types. You don't have to learn a new syntax, the methods or classes of a specific library, etc.
Based on Click.
Example (minimal version):

import typer

def main(name: str):
    typer.echo(f"Hello {name}")

if __name__ == "__main__":
    typer.run(main)

Brian #4: Effectively using Matplotlib
Chris Moffitt
"… I think I was a little premature in dismissing matplotlib. To be honest, I did not quite understand it and how to use it effectively in my workflow." That very much sums up my relationship with matplotlib. But I'm ready to take another serious look at it.
One reason for the complexity is that there are two interfaces: a MATLAB-like state-based interface, and an object-based interface (use this).
Recommendations: Learn the basic matplotlib terminology, specifically what a Figure and an Axes are. Always use the object-oriented interface; get in the habit of using it from the start of your analysis. Start your visualizations with basic pandas plotting. Use seaborn for the more complex statistical visualizations. Use matplotlib to customize the pandas or seaborn visualization. (A minimal sketch of this workflow appears after these show notes.)
Runs through an example, describes figures and plots, and includes a handy reference for customizing a plot.
Related: StackOverflow answer that shows how to generate and embed a matplotlib image into a Flask app without saving it to a file.
Style it with pylustrator.readthedocs.io :)
Michael #5: Django Simple Task
django-simple-task runs background tasks in Django 3 without requiring other services and workers. It runs them in the same event loop as your ASGI application. 
Here's a simple overview of how it works: on application start, a queue is created and a number of workers start listening to the queue. When defer is called, a task (a function or coroutine function) is added to the queue. When a worker gets a task, it runs it or delegates it to a threadpool. On application shutdown, it waits for tasks to finish before exiting the ASGI server. Django must be run with an ASGI server. Example (imports added so the snippet runs as shown):

    import asyncio
    import time

    from django.http import HttpResponse
    from django_simple_task import defer

    def task1():
        time.sleep(1)
        print("task1 done")

    async def task2():
        await asyncio.sleep(1)
        print("task2 done")

    def view(request):
        defer(task1)
        defer(task2)
        return HttpResponse(b"My View")

Brian #6: PyPI Stats at pypistats.org
Simple interface: pop in a package name and get the download stats. Example use: why is my open source project now getting PRs and issues? I've got a few packages on PyPI, not updated much. cards and submark are mostly for demo purposes for teaching testing; pytest-check is a pytest plugin that allows multiple failures per test. I only hear about issues and PRs on one of these, so let's look at traffic. cards: 2 downloads in the last day, 24 in the last week, 339 in the last month. submark: 5 / 9 / 61. pytest-check: 976 / 4,524 / 19,636. That totally explains why I need to start actually supporting pytest-check. Cool. Note: it's still small; the top 20 packages are all downloaded over 1.3 million times per day.

Extras: Comment from the January Python PDX West meetup: “Please remember to have one beginner friendly talk per meetup.” Good point. Even if you can't present here in Portland / Hillsboro, or don't want to, I'd love to hear feedback on beginner-friendly topics that work well for meetups. PyCascades 2020 discount code listeners-at-pycascades for 10% off. Firefox 72 is out with anti-fingerprinting and PIP - Ars Technica.

Joke: Language essays comic
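Returning to the matplotlib item above: the article's recommendations (hold onto the Figure and Axes, start from pandas plotting, then customize with matplotlib) condense to just a few lines. This is a generic sketch with made-up data, not code from Chris Moffitt's article:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Made-up data standing in for whatever you are analyzing.
    df = pd.DataFrame(
        {"month": ["Jan", "Feb", "Mar", "Apr"], "sales": [120, 135, 90, 160]}
    ).set_index("month")

    # Object-oriented interface: keep explicit references to the Figure and Axes.
    fig, ax = plt.subplots(figsize=(6, 4))

    # Start with basic pandas plotting, drawn onto the Axes you control.
    df["sales"].plot(kind="bar", ax=ax)

    # Then use matplotlib itself to customize that same Axes.
    ax.set_title("Monthly sales")
    ax.set_ylabel("Units")
    ax.axhline(df["sales"].mean(), color="red", linestyle="--", label="mean")
    ax.legend()

    fig.tight_layout()
    fig.savefig("sales.png")

The same pattern extends to seaborn, since many of its plotting functions accept an ax= argument, so the customization step stays identical.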
In this episode of Views on Vue, the panel shares what their set-ups look like. They start by discussing IDEs and text editors. Most of them use VS Code for their setups, but they like to use others when they need them. The panelists list some of their favorite plugins: Vetur, Prettier, Vue peeks, NPM, word counters, and spell checkers. They talk about the Vue CLI and other CLIs they use. Next, they talk about what machines they are all using. Most are currently using a MacBook Pro. They discuss the pros and cons of using Mac products. Charles Max Wood talks about the desktop he built and how his next computer will be a PC. They consider Linux on Windows, and they also compare Linux and Mac. Source code and deployment are discussed as well. They finish by sharing the physical set-ups in their offices. They discuss furniture, how many monitors they use, how big their monitors are, and the tools that make their day more comfortable. They discuss the merits of sitting and standing while working, and desk treadmills are considered. They also talk about working at home compared to working from the office. Panelists Charles Max Wood Devlin Duldulao Lindsay Wardell Steve Edwards Sponsors Sentry– use the code “devchat” for two months free on Sentry’s small plan CacheFly Links https://system76.com/pop https://desktop.github.com/ https://jfrog.com/artifactory/ https://about.gitlab.com/ https://www.sharemouse.com/ Conquer Under Desk Portable Electric Treadmill Walking Pad Anti Fatigue Standing Desk Mat https://vuetifyjs.com/en/ https://github.com/nuxt/create-nuxt-app https://nuxtjs.org/ https://github.com/vuejs/vetur https://www.facebook.com/ViewsonVue https://twitter.com/viewsonvue Picks Charles Max Wood: A Christmas Story Rudolph the Red-Nosed Reindeer The Little Drummer Boy Santa Claus Is Comin' to Town The Ultimate Gift Lindsay Wardell: https://thedangercrew.com/ Steve Edwards: https://laughingsquid.com/mouse-cleans-up-tool-shed/ Devlin Duldulao: Rhinos
We discuss Kubecon and Slack vs. Microsoft Teams. Pretty frothy stuff! Hey, Coté got off his ass and finally revved back up his newsletter (https://buttondown.email/cote). People love it! Subscribe (https://buttondown.email/cote) and tell all your friends to subscribe (https://buttondown.email/cote). Mood board: One. “I am not upgrading.” “Nothing ever good comes out of upgrades.” “I need someone with 35 years experience to run the internal blog.” “Shut the doors on your way out.” Planning makes us professional. Tech news sucks nowadays, where’s the good news? Put that gravy on your shorts. It’s the Sanka of hot sauces. The problem is, there’s only one size of tortilla: giant. ah tortillas (https://www.ah.nl/producten/product/wi173410/ah-tortilla-wraps-voordeel). They have really good lighting. Prove me wrong, Kubecon 2020. “Their negotiation with the underlying platform.” I’m not a technical person, I’m a toga person. The Oxnard Comma. Stop not giving your money to Kafka, give it to us. Is that still the future, or is it finally the present? When the money tree starts shaking, might as well catch some money. Giant monsters fighting in Japan. Call in Idris Elba. The video conferencing circle of life. The Schwag Cycle. 1am city council meetings for the parking app product manager. Cocky sci-fi Europeans. Relevant to your interests: Cloud Native Computing Foundation Reaches Over 100 Certified Kubernetes Vendors (https://www.cncf.io/announcement/2019/11/19/cloud-native-computing-foundation-reaches-over-100-certified-kubernetes-vendors/) - a certified vendor is an organization that provides a Kubernetes distribution, hosted platform, or installer. Cloud Native Computing Foundation Continues Tremendous Growth, Surpassing 500 Members (https://www.cncf.io/announcement/2019/11/19/cloud-native-computing-foundation-continues-tremendous-growth-surpassing-500-members/) - in the third quarter of 2019, 56 members joined CNCF. The rapid growth underscores increasing momentum around cloud native technologies just as a record-breaking 12,000 attendees gather for KubeCon + CloudNativeCon North America (https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/). HPE launches container platform, aims to be 100% open source Kubernetes (https://www.zdnet.com/article/hpe-launches-container-platform-aims-to-be-100-open-source-kubernetes/) - "With BlueData, customers won't be managing five different clusters," he said. "We will have one central point and 100% open-source Kubernetes (https://www.zdnet.com/search/?o=0&q=kubernetes) that is curated and at the top of the trunk." HPE doesn’t want Xerox’s valuation: HP Just Rejected Xerox. Here Are the Moves It Could Make Next. (https://www.barrons.com/articles/hp-stock-xerox-whats-next-51574097138) Curiously good press release: Announcing Oracle API Gateway, Oracle Logging and Kafka Compatibility for Oracle Streaming (https://containerjournal.com/news/news-releases/announcing-oracle-api-gateway-oracle-logging-and-kafka-compatibility-for-oracle-streaming/amp/). Slack vs. Teams, etc.
Slack stock drops as Microsoft claims big lead with 20 million Teams users (https://www.cnbc.com/2019/11/19/microsoft-teams-reaches-20-million-daily-active-users.html). Slack touts user growth as it faces growing competition from Microsoft (https://www.cnbc.com/2019/10/10/slack-says-it-crossed-12-million-daily-active-users.html). Amazon’s packaging: Why thousands of Amazon packages converge on a tiny Montana town (https://www.theverge.com/2019/11/14/20961523/amazon-walmart-target-package-delivery-sales-tax-montana-roundup). Google buys some company, CloudSimple…? Snowflake Data Warehouse Partner (https://www.snowflake.com/technology-partners/google-cloud-platform/) Google acquires CloudSimple to bolster cloud workload migration (https://www.google.com/amp/s/venturebeat.com/2019/11/18/google-acquires-cloudsimple-to-bolster-cloud-workload-migration/amp/) - CloudSimple® provides a secure, high performance, dedicated environment in Public Clouds to run VMware workloads. What’s the point: Puppet wash, HPE Container Platform, JenkinsX, K3s, and Gremlins in the cloud (https://devclass.com/2019/11/20/whats-the-point-puppet-wash-hpe-container-platform-jenkinsx-k3s-and-gremlins-in-the-cloud/) IBM driving open source advancements to help developers be more productive with Kubernetes (https://developer.ibm.com/blogs/ibm-open-source-developers-productive-kubernetes/) - Kui (https://www.kui.tools/) is designed to be a single tool to help developers navigate between the different CLIs relevant to each part of the solution. Nvidia and Microsoft launch Azure supercomputing instance (https://venturebeat.com/2019/11/18/nvidia-and-microsoft-launch-azure-supercomputing-instance/) SoftBank to create $30 billion tech giant via Yahoo Japan, Line Corp deal (https://www.reuters.com/article/us-z-holdings-line/softbank-to-create-30-billion-tech-giant-via-yahoo-japan-line-corp-deal-idUSKBN1XR0W5) - SoftBank Corp plans to merge internet subsidiary Yahoo Japan with messaging app operator Line Corp to create a $30 billion tech group, as it strives to compete more effectively with local rival Rakuten and U.S. tech powerhouses. Google’s rollout of RCS chat for all Android users in the US begins today (https://www.theverge.com/2019/11/14/20964477/googles-rcs-chat-android-rollout-us-ccmi-texting-sms). Apple plans a Prime-like subscription bundle, but that has News+ publishers worried (https://www.google.com/amp/s/arstechnica.com/gadgets/2019/11/apple-plans-to-bundle-music-news-and-tv-in-one-subscription-report-says/%3famp=1). Nike Pulling Its Products From Amazon in E-Commerce Pivot (https://www.bloomberg.com/news/articles/2019-11-13/nike-will-end-its-pilot-project-selling-products-on-amazon-site). The 20 Best DevOps Podcasts (https://devops.com/the-20-best-devops-podcasts/). Nonsense: WeWork at Kubecon? (https://twitter.com/jehb/status/1196896481543774208?s=19) Stadia Countdown (http://stadiacountdown.com/) Sponsors SolarWinds: To try it FREE for 14 days, just go to http://loggly.com/sdt. If it logs, it can log to Loggly. PagerDuty: To see how companies like GE, Vodafone, Box and American Eagle Outfitters rely on PagerDuty to continuously improve their digital operations visit https://pagerduty.com. Suggested Jobs CNCF Job Board (https://jobs.cncf.io/jobs/search) Listener Feedback Sent stickers to Chris in Pittsburgh. He says, “Thanks much for the show, look forward to every week and the great mix of strategy, Kubernetes, parenting, meat, and travel.” Ashish from Charlotte says he “Loves the show.” Conferences, et al.
December 2019, a city near you: The 2019 SpringOne Tours are posted (http://springonetour.io/): Toronto Dec 2nd (https://springonetour.io/2019/toronto). December 12-13 2019 - Kubernetes Forum Sydney (https://events.linuxfoundation.org/events/kubernetes-summit-sydney-2019/) NO-SSH-JJ wants you to go to DeliveryConf (https://www.deliveryconf.com/) in Seattle on Jan 21st & 22nd (https://www.deliveryconf.com/). Use promo code SDT10 to get 10% off. JJ wants you to read about the DeliveryConf Format too (https://www.deliveryconf.com/format). June 1-4: ChefConf 2020 (https://chefconf.chef.io/) Jordi wants you to go to GitLab Commit (https://about.gitlab.com/events/commit/) Jan. 14th. SDT news & hype: Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/). Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté’s book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Recommendations: Matt: Brandon’s picks are eventually good - Sicario (https://www.imdb.com/title/tt3397884/) - 30 for 30: Pony Exce$$ (http://www.espn.com/30for30/film/_/page/pony-excess). Australian fires (https://www.theatlantic.com/photo/2019/11/photos-of-australias-catastrophic-bushfires/602350/). Brandon: Park in Austin App (https://www.parkatxapp.com/). AWS GameDay (https://aws.amazon.com/gameday/). Coté: casino bread (https://nl.wikipedia.org/wiki/Casinobrood). Ezra Klein Dave Eggers episode (https://www.vox.com/podcasts/2019/11/18/20970587/dave-eggers-ezra-klein-technology-smartphone). Mr. Penumbra's 24-Hour Bookstore (https://www.goodreads.com/book/show/13538873-mr-penumbra-s-24-hour-bookstore) and Sourdough (https://www.goodreads.com/book/show/33916024-sourdough?from_search=true&qid=PiG3RQ9fyP&rank=1).
How did we convert our game Quiz Planet from the Facebook Instant Game platform into a native iOS and Android app? After many years, this is our first project in which we use web technologies as a cross-platform solution. In this episode you'll hear which challenges we ran into along the way and why we haven't been doing it this way for years already. We also go into why we didn't use a framework to reach our goal. We also talk about the song Bridge over Troubled Water by Simon and Garfunkel, and Soundhound is the app that Dennis couldn't remember. Picks of the Day Dennis: Superquery – the better Google BigQuery interface. Sebi revises his socket.io recommendation. Fabi: Yargs, a little helper for building interactive CLIs. Write to us! Send us your topic requests and your feedback. podcast@programmier.bar Follow us! Stay up to date on upcoming episodes and meetups and join the community discussions. Twitter Instagram Facebook Meetup YouTube Find out here when the next meetup takes place at our office in Bad Nauheim. Meetup Music: Hanimo
Panel Joe Eames Brooke Avery Jesse Sanders Sam Julien Luis Hernandez Mike Dane Joined by special guest: Mike Ryan Episode Summary In this episode, the panelists talk to Mike Ryan, Software Architect at Synapse, Google Developer Expert, and a core team member of the NgRx team. Joe starts the discussion by elaborating on the topic chosen and explains what constitutes a "problem" in a developer's life. He asks the panel how often they use classical algorithms in their everyday work. They then steer the discussion from implementing classical algorithms to logical ones, and discuss how they tackle and overcome complex computing challenges that can be very taxing. They talk about a technique called "Rubber Duck programming", how to go about creating a conducive environment for problem solving, and explain the concept of "flow" in software development along with its importance while dealing with issues. They discuss whether pair-programming and mob-programming help in problem solving and what their benefits are. After discussing problem solving in computing, the panelists change the direction of the conversation towards solving team and process pitfalls. They talk about how important friendships and emotional investments can be, especially when there are challenges at work, and Jesse explains a methodology called the Quadrant System. In the end, they speak about handling personal problems as an engineer and offer helpful tips to listeners. Links Mike on Twitter Mike Ryan - Angular in Depth Svelte Rubber Duck Debugging Rework Radical Candor The viral tweet and response! Picks Mike Dane: Pomodoro Technique Brooke Avery: Pomelo Travel Sam Julien: Rocket emoji app Luis Hernandez: GitHub projects Mike Ryan: React for CLIs Joe Eames: Stormboard
Sponsors Triplebyte offers a $1000 signing bonus Sentry use the code “devchat” for $100 credit Linode offers $20 credit CacheFly Panel AJ O’Neal Chris Ferdinandi Aimee Knight Charles Max Wood Joined by special guest: Mikeal Rogers Episode Summary This episode of JavaScript Jabber starts with Mikeal Rogers introducing himself and his work in brief. Charles clarifies that he wants to focus this show on some beginner content such as node.js basics, so Mikeal gives some historical background on the concept, elaborates on its modern usage and features and explains what “streams” are, for listeners who are starting to get into JavaScript. The panelists then discuss how languages like Go and Python compare to node.js in terms of growth and individual learning curves. Mikeal answers questions about alternate CLIs, package management, Pika, import maps and their effect on node.js, and on learning JavaScript in general. Chris, Charles and AJ also chip in with their experiences in teaching modern JS to new learners and its difficulty level in comparison to other frameworks. They wrap up the episode with picks. Links Mikeal on Twitter Mikeal on GitHub Follow JavaScript Jabber on Devchat.tv, Facebook and Twitter. Picks Chris Ferdinandi: Mozilla Firefox Artifact Conference Aimee Knight: A Magician Explains Why We See What’s Not There Programming: doing it more vs doing it better Mikeal Rogers: The Future of the Web – CascadiaJS 2018 Brave Browser Charles Max Wood: Podwrench
For many years, Jeff Dickey was a lead architect for Heroku's CLI tool, which was used by application developers to get their apps deployed to Heroku's platform. He muses on his history with CLIs with Nahid Samsami, a director of product at Heroku, as the two of them worked together on oclif. oclif was designed from the start to be a framework for developers to use when building their own command-line interfaces. It's currently written in TypeScript, but Jeff goes through its four-year history, starting with its roots in Ruby and on through its Frankenstein's-monster mashup of Go and JavaScript. While each language had its pros and cons, the key constraint was how the resulting command-line binary program would be distributed. The project has been incredibly popular, from internal adoption at Heroku and Salesforce to its reception and use by other companies, as well as the active contributions made on GitHub. The episode concludes with some theories about the future of CLI tooling. PowerShell, for example, is a fully object-oriented environment which a developer can program against. Jeff is also interested in better integration between the terminal and UI elements. Links from this episode oclif.io, the main landing page for learning more about oclif Microsoft's PowerShell, which Jeff believes is the most incredibly advanced CLI platform
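oclif itself is a TypeScript framework, so the snippet below is not oclif code. It is a hypothetical Python sketch (using the standard library's argparse, to match the other examples in these notes, with made-up command names) of the command/subcommand layout that multi-command CLIs like the Heroku CLI are organized around: each subcommand declares its own arguments and maps to a handler function.

    import argparse

    def apps_list(args):
        # Placeholder handler; a real CLI would call an API here.
        print("listing apps (stub)")

    def apps_create(args):
        print(f"creating app {args.name} (stub)")

    def build_parser():
        parser = argparse.ArgumentParser(prog="mycli", description="Toy multi-command CLI.")
        subcommands = parser.add_subparsers(dest="command", required=True)

        list_cmd = subcommands.add_parser("apps:list", help="List applications")
        list_cmd.set_defaults(func=apps_list)

        create_cmd = subcommands.add_parser("apps:create", help="Create an application")
        create_cmd.add_argument("name", help="Name of the new application")
        create_cmd.set_defaults(func=apps_create)

        return parser

    if __name__ == "__main__":
        args = build_parser().parse_args()
        args.func(args)

Running python mycli.py apps:create demo would dispatch to the apps_create handler. A framework like oclif layers plugin loading, help formatting, and, crucially, packaging and distribution of the finished binary on top of this kind of dispatch, which is the constraint Jeff describes as driving its language choices.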
We’re joined by Sarah Drasner; Vue.js core team member, Microsoft Senior Developer Advocate, and CSS Tricks writer. Michael gets a new pair of shoes, SuAnne needs to watch her mouth at work, and Kyle was the first person to ever have a bad experience with Comcast. Sarah is planning a conference in Nigeria. We compare and contrast CLIs and GUIs and figure out the best use cases for each. We dig into anachronyms, iconography, and words, and get philosophical.
Episode 247: ControlTalk NOW — Smart Buildings Videocast and PodCast for week ending December 3, 2017 features interviews with Network as a Service expert Ron Victor, CEO and Founder of IoTium, along with Master Systems Integrator Extraordinaire, Jason Houck, CIO of Hepta Systems and The Panel Shoppe. More insights and updates from Functional Devices, Women in HVACR, Honeywell LCBS Connect, Siemens Total Room Automation, KMC’s complete Building Control Solutions, Introducing Niagara’s first Cloud Offering, Ken Sinclair’s December edition #RUIoTReady, and the video announcement of the 2017 ControlTrends Awards finalists. CTN 247 Controltalk Now The SmartBuildings Video Cast from Eric Stromquist on Vimeo. Functional Devices, Inc. Functional Devices, Inc. has been designing and manufacturing quality electronic devices in the United States of America since 1969. Our goal is to provide high quality products for the most reliable and economical solutions to the needs of our customers, along with world-class support from our sales and engineering experts. Functional Devices has established itself as a leader in the HVAC, Building Controls, Energy Management, Energy Savings, Lighting Controls, and Wireless industries. Women in HVACR @ HARDI Annual Conference 2017, Dec 2, 2017, at The Aria Resort & Casino, Las Vegas, Nevada. RESERVE YOUR SPOT TODAY! Saturday December 2nd, prior to the HARDI Annual Conference, Women in HVACR will host a workshop and networking event from 2:00pm to 5:30pm. Women in HVACR exists to improve the lives of our members by providing professional avenues to connect with other women growing their careers in the HVACR industry. Happy Thanksgiving ControlTrends Community! We wish everyone a wonderful, relaxing, and peaceful Thanksgiving Holiday. For those that must work or serve through the Holidays, we greatly appreciate your faithful service. The history of Thanksgiving Day from Reference.Com: Thanksgiving is celebrated in the United States as a national holiday dedicated to being with family and friends to give thanks for all of the blessings received throughout the previous year. Honeywell LCBS Connect — A Ground Breaking Cloud-Based Light Commercial Control Solution for Service Contracting Professionals. LCBS Connect Cloud-Based Light Commercial Control: Improve service to your customers and grow your business with Honeywell’s LCBS Connect. Remote HVAC system monitoring and diagnostics help you better serve your customers while spending less time in your truck. ControlTalk NOW welcomes our first guest, IoTium’s CEO and Founder, Ron Victor. IoTium provides managed secure network infrastructures for the Industrial Internet of Things (IIoT). IoTium’s extensive experience and success in industrial markets allows it to offer a mature solution to the BMS and HVAC markets. Eliminate the need for onsite technicians or costly truck-rolls, CLIs, usernames and passwords, or changes to enterprise proxy and firewall policies. With IoTium Orchestrator, you can automatically authenticate, provision and configure your network infrastructure. Policies define what data needs to be sent where and when. Siemens Total Room Automation — The New Standard for Total Room Control and Cost Savings. Siemens Total Room Automation (TRA) combines room HVAC, lighting and/or shading systems into one seamless package.
As one solution instead of three, Siemens TRA cuts initial product costs, installation costs, and ongoing operating costs, while increasing room energy efficiency and providing optimal comfort for room occupants. ControlTalk NOW’s second guest is Jason Houck, CIO of Hepta Systems and The Panel Shoppe. As one of the top Master Systems Integrators in the business, Jason gives the ControlTrends Community a deep-dive view of the multitude of products, solutions, and services The Panel Shoppe offers. The Panel Shoppe delivers Custom Control Panels, Tech Services, FIN Stack, Parts & Smarts, Software as a Service, Niagara Support, Analytics, Entrocim, and Custom Graphics. Dashboards: Enabling Passive and Active Energy Management. Check out this very informative post about Smart Building Controls Dashboards from our friend Steve Guzelimian, the president of Optergy. You can meet Steve at the 2017 ControlTrends Awards and stop by his booth at the AHR Show. Most building owners and operators struggle to incentivize occupants to change their behavior when it comes to sustainability. KMC’s Complete Building Control Solutions are the Best on the Market: Open, Secure, Scalable, and Easy. We’re the Experts. For nearly 50 years, KMC Controls has provided state-of-the-art building automation and control solutions to a global network of system integrators and distributors. Our products are controlling buildings of all sizes and types all around the world, including the most sustainable office building in North America. KMC is committed to open, secure, scalable, and most importantly, easy building automation solutions. Introducing the First Niagara Cloud Offering — Learn More About Backup as a Service During our Next TridiumTalk. Learn more about Backup as a Service during our next TridiumTalk: December 14, 2017, 11:00 a.m. Eastern Time (New York, UTC-05:00). Senior product manager Kapil Sharma will be unveiling Backup as a Service, the first Niagara Cloud offering in a growing suite of services, during our next TridiumTalk. Kapil will offer a first-hand look at how to use Backup as a Service to protect the enterprise from losing valuable Niagara station and configuration data. Congratulations to the 2017 ControlTrends Finalists — Exciting Video Announcement! Congratulations to this year’s well-deserved finalists! Final voting ballots will be emailed in the very near future. ControlTrends is excited to celebrate our sixth annual ControlTrends Awards this year in Chicago, at the Hard Rock Café, located at 93 W. Ontario Street from 6:30 PM to 9:30 PM. The Chicago Hard Rock Café nightclub style of venue is especially well-suited for this year’s theme. Ken Sinclair’s Automated Buildings December, 2017 Theme: Are You – IoT Ready? #RUIoTReady. In his December edition of Automated Buildings, owner and editor Ken Sinclair rewards his readers with the industry’s most current and comprehensive glossary of IoT related terms, definitions, concepts, and related articles (complete with hyperlinks for further exploration). Additionally, Ken writes with unique insight into his December theme #RUIoTReady and then asks a few rather rhetorical questions: What is IoT? Why should I care? What does “Ready” mean? The post Episode 247: ControlTalk NOW — Smart Buildings Videocast and PodCast for Week Ending December 3, 2017 appeared first on ControlTrends.
"Rule 1 of building the cloud: ruthlessly automate everything.” -@PGelsinger If you've ever considered automating anything in your VMware environment chances are you've either read a blog post or even used some sample code from one or both of our guests this week. With VMworld just over a week away we thought it would be a perfect time to bring in our dear friends William Lam and Alan Renouf to get their thoughts on ruthless automation and to see what they have planned for the big show. Show Related Links StorageHub VMware {Code} William's Blog Virtually Ghetto Alan's Blog: virtu-al.net VMware Github Catch Willaim in the following VMworld sessions MTE4724U VMware Automation with Wiliam Lam SER2958BU Migrate to the vCenter Server Appliance You Should VMTN6722U Hackathon Event: 15 teams hack on ideas! Catch Alan Renouf in the following VMworld sessions MTE4723U vSphere APIs and CLIs with Alan Renouf SER3036GU vSphere APIs with Alan Renouf and Kyle Ruddy SER1875BU The Power Hour: vSphere PowerCLI 10th Birthday Edition SER2529BU vSphere PowerCLI What’s New: The Next Evolutionary Leap Is Now LHC1748BU VMware Cloud for AWS and the Art of Software-Defined Data Centers: API, CLI, and PowerShell SER1906BU VMware and Chef: Leveraging the vSphere API Together SER1912BU VMware Open-Source SDKs: From Getting Started to Web App in One Hour VMTN6722U Hackathon Event: 15 teams hack on ideas! Catch Pete and John in the following VMworld sessions STO2446BU Virtual Volumes Technical Deep Dive STO3276GU vSAN Networking and Design Best Practices STO2047BU What’s New in vSAN 6.6: Technical Deep Dive STO2095BU vSAN ReadyNode and Build Your Own Hardware Guidance PAR4398BCU VMware Technical Solutions Professional – Hyper-Converged Infrastructure (VTSP-HCI, formerly VTSP-SDS) PAR4367BU What’s New in vSAN 6.6 – A Deep Dive STO1926BU vSAN 6.6: A Day in the Life of an I/O The Virtually Speaking Podcast The Virtually Speaking Podcast is a weekly technical podcast dedicated to discussing VMware topics related to storage and availability. Each week Pete Flecha and John Nicholson bring in various subject matter experts from VMware and within the industry to discuss their respective ar...
iPS 210: Build Special 3 - Visual Studio Mobile Center Deeper Dive with Ela Malani & Piyush Joshi This is a special episode of iPhreaks from Microsoft Build with panelists Jaim Uber and Andrew Madsen. They are joined by two special guests, Piyush Joshi and Ela Malani, to discuss Visual Studio Mobile Center. Tune in to learn more about this product! [00:00:20] Introduction to Piyush Piyush is a program manager on the Visual Studio Mobile Center team. He has been at Microsoft for nine years. He’s recently been working on the Mobile Setup Services that are provided by Microsoft. [00:00:44] Introduction to Ela Ela is a program manager in the Mobile Center and has been working for Microsoft for three years. She owns the SDKs and CLIs for Mobile Center. [00:01:34] What SDKs does Mobile Center have? Mobile Center supports a variety of platforms (iOS, React Native, etc). A great feature is that the SDKs are all Open Source on GitHub. Users can just use the SDKs they want, which provides the ability to keep app sizes small. [00:02:44] Do you accept contributions? Definitely. They are always actively looking for the developer community to contribute to the Open Source SDKs. [00:03:00] If I want to check out the project how do I find it? There are four projects on GitHub: the Mobile Center SDK for iOS, the Mobile Center SDK for Android, one for dotnet, and one for React Native. [00:03:25] What installation methods do you support? Developers for iOS can download it in two ways. They can download manually or via CocoaPods to get started. There is no Carthage support yet, but it is coming. [00:04:30] When you download this, are you getting a library? Users are downloading a library. The biggest reason to have it on GitHub is to gain developers’ trust. Developers want to know what you are shipping because of privacy reasons - is it secure, is it safe? SDKs collect user data and developers need to be confident in the privacy abilities. Open Source SDKs make the product more attractive. The app developer gets full control of what info gets sent to the backend. Data does not get transmitted if users do not want it to be. [00:07:30] What does your Command Line Interface (CLI) do? Why do you provide one and how can your users utilize it? Mobile Center has an open CLI in order for users to have a lot of control. Everything can be done via CLI – using the test services, distributing to users, getting crash reports, uploading files, etc. Developers don’t have to go through the portal. Just open the CLI and perform the same actions. [00:08:50] Do you know what your users are using the CLI for? Test services is one service that is being heavily used. Mobile Center can provide one line of command that shows what needs to be triggered in the CLI to set up test services on every device. [00:10:00] Can you use your own CLI service with Mobile Center? Yes. Mobile Center provides all setup services but users are free to choose which services they want to utilize. They don’t have to download a huge file with everything included; they can just download the one thing they want. Each of the services can be used individually or integrated with various test distribution. It is up to developers how they want to customize their app. [00:11:46] How do I set up test services? Create an account and app within Build. Then access the test service in this case. Use any of the frameworks and start a new test run. Then, upload your package and test scripts. After that, send the tests to the backend, which will run them for you.
You can select which devices you wish to run tests on and then see the results. [00:15:40] Fast Lane Support There is no fast lane support in Build right now but they are investigating how that can happen soon. [00:16:35] Does Microsoft have any Ruby applications? Not right now but it should not be a problem. [00:17:10] What platforms are supported with the CLI? There are two platforms that are supported right now, which are Windows and Mac. [00:18:00] What led you to support React Native? A full focus for Mobile Center is React Native. There are not a lot of products out there that currently support React Native. A goal is to provide first class support for React Native. The Build service also provides support for React Native apps. They are thinking of how to support CodePush as well. [00:20:50] HockeyApp Mobile Center SDKs are developed on top of the HockeyApp SDKs. For people that use HockeyApp, Ela and Piyush recommend trying Mobile Center. The difference is that they are attempting to make Mobile Center the “one stop shop for all developer needs.” Picks Ela: Settlers of Catan Piyush: Born to Run by Christopher McDougall Links Visual Studio Mobile Center https://github.com/Microsoft/mobile-center-sdk-dotnet https://github.com/Microsoft/mobile-center-sdk-ios https://github.com/Microsoft/mobile-center-sdk-android https://github.com/Microsoft/mobile-center-sdk-react-native
## Drupal Console
* What is Drupal Console?
* It is a suite of tools that you run on a command line interface (CLI) to generate boilerplate code and interact with a Drupal 8 installation.
* How is it different from Drush? And are there overlapping features?
* There are many similarities between these two tools, but the main difference is how it was built, using an object-oriented architecture and Symfony components.
* Will we continue to use both? Or will Drupal Console replace Drush?
* I think we will keep using both, at least for now, and maybe at some point we can merge both tools.
* What are some of the things that you will keep using Drush for in Drupal 8?
* Site aliases (sync, backup), downloading modules, installing the site.
* Are you planning to introduce those features into Drupal Console?
* Yes, we are actually working on site aliases.
* What are some things that Drupal Console can do that Drush can’t?
* Who is the intended audience of Drupal Console?
* You can use Drupal Console to help you develop faster and smarter with Drupal 8.
* Developers and site builders, since you can generate code and files required by a Drupal 8 module and interact with your Drupal installation (debugging services and routes); you can put your site in maintenance mode or switch system performance configuration.
* But you can also use this tool to learn Drupal 8.
* We are also working on a GUI, so if you are afraid of CLIs we will provide a web site you will be able to use to select what you want to generate (module, controller, services, blocks, entities, etc.) and then download the generated code.
* Would it be fair to say that right now, it’s most useful to developers who are actually writing code, while later, it will also be useful for site builders who are used to using Drush to create sites, install modules and themes, and perform routine tasks?
## Use Cases
* What are some things you can do with Drupal Console?
* Drupal Console helps you develop faster and smarter with Drupal 8.
* Generating the code and files required by a Drupal 8 module.
* Interacting with your Drupal installation.
* Learning Drupal 8.
## Updates and Future
* You were on the Drupalize.me podcast back in February talking about Drupal Console; what are some of the improvements that have been made since then? (Especially with Drupal 8 progressing the way it is.)