It's been four years since the TypeScript schema validation library Zod released v3, but the new v4 release makes it worth the wait. Expect faster parsing times across the board, built-in error pretty-printing, and even a tree-shakeable API called Zod Mini for constrained environments like edge runtimes. (A rough code sketch of the new Zod APIs follows these show notes.)

There's a new npm-based CLI tool called vibe-rules for managing and sharing AI rules across different editors and tools. In addition to saving favorite prompts so they can be applied to any supported editor, vibe-rules can also automatically install prompts shared in a project's npm packages into an editor's configuration. It's early days yet, but it's a great idea for making prompts easier for anyone to use.

Angular v20 is out with some much-anticipated highlights: stabilized signal-based APIs, incremental hydration, custom Angular reporting directly in Chrome DevTools, GenAI development advancements, and, last but not least, an RFC for an official Angular mascot. Not to bias you, but we favor the pink, dice-shaped mascot around here.

In this episode:
1:10 - Zod v4
5:50 - vibe-rules
15:12 - Angular 20
27:03 - Remix v3
31:32 - Stack Overflow's Annual Dev Survey
38:02 - Firefox and Temporal
39:15 - Bolt's hackathon status

News:
Paige - Zod v4
Jack - vibe-rules
TJ - Angular 20

Lightning News:
Remix v3 updates
Firefox is the first browser to support Temporal (Temporal on MDN)
Stack Overflow's Annual Dev Survey is out now
Bolt's hackathon starts

What Makes Us Happy this Week:
Paige - Annual Gloucestershire cheese rolling race and Wiki history
Jack - The Portland Pickles baseball game
TJ - StoryGraph and The God of the Woods

Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or talk to us on X, Bluesky, or YouTube.

Front-end Fire website
Blue Collar Coder on YouTube
Blue Collar Coder on Discord
Reach out via email
Tweet at us on X @front_end_fire
Follow us on Bluesky @front-end-fire.com
Subscribe to our YouTube channel @Front-EndFirePodcast
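To make the Zod v4 items concrete, here is a minimal sketch of the classic API next to Zod Mini. It assumes the import paths and helper names we've seen in the v4 docs (`zod/mini`, `z.prettifyError`, functional checks like `z.minLength`); depending on how you installed v4 they may live under `zod/v4` and `zod/v4-mini`, so check the release notes for your installed version.

```typescript
import { z } from "zod"; // Zod v4 classic API (may be "zod/v4" on the 3.25.x bridge)
import * as zm from "zod/mini"; // tree-shakeable, function-based variant

// Classic API: same schema style as v3, now with built-in error pretty-printing.
const User = z.object({
  name: z.string().min(1),
  email: z.string().email(),
});

const result = User.safeParse({ name: "", email: "not-an-email" });
if (!result.success) {
  // Renders a readable, multi-line summary of every validation issue.
  console.error(z.prettifyError(result.error));
}

// Zod Mini: wrappers and checks are standalone functions, so bundlers can
// drop everything you don't call — handy for edge runtimes.
const MiniUser = zm.object({
  name: zm.string().check(zm.minLength(1)),
  email: zm.string(),
  nickname: zm.optional(zm.string()),
});

console.log(MiniUser.safeParse({ name: "Ada", email: "ada@example.com" }).success);
```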
News includes Hex 2.2.0 with the new :warn_if_outdated option for keeping dependencies updated, Honeybadger's APM with built-in Elixir traces for major components, José Valim demonstrating Tidewave with Zed's AI coding agents, LiveDebugger v0.2.0 with DevTools integration and component highlighting, Dave Lucia's new Elixir "Lua" library for embedding Lua scripting, Paulo Valente's "handoff" library for distributed function graph execution, a PhD thesis on Elixir code smells becoming a finalist for a prestigious award, and more!

Show Notes online - http://podcast.thinkingelixir.com/254

Elixir Community News

- https://paraxial.io/ – Paraxial.io is sponsoring today's show! Sign up for a free trial of Paraxial.io today and mention Thinking Elixir when you schedule a demo for a limited-time offer.
- https://github.com/hexpm/hex/releases/tag/v2.2.0 – Hex releases 2.2.0, introducing the :warn_if_outdated option to help keep dependencies updated.
- Taking a week off - no episode next week, but returning the following week.
- https://www.honeybadger.io/blog/elixir-performance-monitoring – Honeybadger now offers APM with built-in Elixir traces, including default dashboards for Ecto, Phoenix/LiveView, Oban, Absinthe, Finch, and Tesla.
- https://x.com/josevalim/status/1920062725394243640 – José Valim demonstrates Tidewave being used with Zed editor's AI coding agents.
- https://zed.dev/agentic – Zed's agentic features used with Tidewave to code a pricing plan component.
- https://www.reddit.com/r/elixir/comments/1kgyfhb/livedebugger_v020_is_out/ – LiveDebugger v0.2.0 released with a Chrome DevTools extension, component highlighting, callback trace filtering, and dark mode.
- https://podcast.thinkingelixir.com/249 – Previous podcast episode discussing LiveDebugger with Krzysztof.
- https://blog.swmansion.com/whats-new-in-livedebugger-v0-2-0-4543d3af5486 – Blog post covering the new features in LiveDebugger v0.2.0.
- https://hexdocs.pm/luerl/readme.html – Luerl v1.4.1 released with Hex docs - an implementation of Lua 5.3 in Erlang/OTP.
- https://github.com/rvirding/luerl – The GitHub repository for Luerl, which Dave Lucia worked on with Robert Virding.
- https://www.lua.org/about.html – Information about Lua, a lightweight, embeddable scripting language.
- https://bsky.app/profile/davelucia.com/post/3lozadtvqtc2m – Dave Lucia's announcement of his new Elixir "Lua" library.
- https://davelucia.com/blog/lua-elixir – Blog post explaining Dave's new Elixir Lua library.
- https://github.com/tv-labs/lua – The GitHub repository for the new Elixir Lua library, providing an ergonomic interface to Luerl.
- https://hexdocs.pm/handoff/ – Documentation for "handoff", a new Elixir library for distributed function graph execution.
- https://bsky.app/profile/polvalente.social/post/3louqxeegrs2u – Paulo Valente's announcement of the handoff library, which enables distributed Nx computations.
- https://github.com/polvalente/handoff – GitHub repository for the handoff library, created by Paulo Valente and sponsored by TvLabs.
- https://bsky.app/profile/lucasvegi.bsky.social/post/3lke2pt2zws2e – Lucas Vegi's PhD thesis "Code Smells and Refactorings for Elixir" is a finalist for the SBC Dissertation Award.
- https://hexdocs.pm/elixir/code-anti-patterns.html – Elixir's code anti-patterns guide, a practical resource related to code smells and refactoring in Elixir.

Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email the show at show@thinkingelixir.com

Find us online
- Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com)
- Message the show - X (https://x.com/ThinkingElixir)
- Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
- Email the show - show@thinkingelixir.com
- Mark Ericksen on X - @brainlid (https://x.com/brainlid)
- Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social)
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid)
- David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com)
- David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Today's episode is with Paul Klein, founder of Browserbase. We talked about building browser infrastructure for AI agents, the future of agent authentication, and their open source framework Stagehand.

* [00:00:00] Introductions
* [00:04:46] AI-specific challenges in browser infrastructure
* [00:07:05] Multimodality in AI-Powered Browsing
* [00:12:26] Running headless browsers at scale
* [00:18:46] Geolocation when proxying
* [00:21:25] CAPTCHAs and Agent Auth
* [00:28:21] Building “User take over” functionality
* [00:33:43] Stagehand: AI web browsing framework
* [00:38:58] OpenAI's Operator and computer use agents
* [00:44:44] Surprising use cases of Browserbase
* [00:47:18] Future of browser automation and market competition
* [00:53:11] Being a solo founder

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

swyx [00:00:12]: Hey, and today we are very blessed to have our friend, Paul Klein the fourth, CEO of Browserbase. Welcome.

Paul [00:00:21]: Thanks guys. Yeah, I'm happy to be here. I've been lucky to know both of you for like a couple of years now, I think. So it's just like we're hanging out, you know, with three ginormous microphones in front of our face. It's totally normal hangout.

swyx [00:00:34]: Yeah. We've actually mentioned you on the podcast, I think, more often than any other Solaris tenant. Just because like you're one of the, you know, best performing, I think, LLM tool companies that have started up in the last couple of years.

Paul [00:00:50]: Yeah, I mean, it's been a whirlwind of a year, like Browserbase is actually pretty close to our first birthday. So we are one year old. And going from, you know, starting a company as a solo founder to... To, you know, having a team of 20 people, you know, a series A, but also being able to support hundreds of AI companies that are building AI applications that go out and automate the web. It's just been like, really cool. It's been happening a little too fast. I think like collectively as an AI industry, let's just take a week off together. I took my first vacation actually two weeks ago, and Operator came out on the first day, and then a week later, DeepSeek came out. And I'm like on vacation trying to chill. I'm like, we got to build with this stuff, right? So it's been a breakneck year. But I'm super happy to be here and like talk more about all the stuff we're seeing. And I'd love to hear kind of what you guys are excited about too, and share with it, you know?

swyx [00:01:39]: Where to start? So people, you've done a bunch of podcasts. I think I strongly recommend Jack Bridger's Scaling DevTools, as well as Turner Novak's The Peel. And, you know, I'm sure there's others. So you covered your Twilio story in the past, talked about StreamClub, you got acquired to Mux, and then you left to start Browserbase. So maybe we just start with what is Browserbase? Yeah.

Paul [00:02:02]: Browserbase is the web browser for your AI. We're building headless browser infrastructure, which are browsers that run in a server environment that's accessible to developers via APIs and SDKs. It's really hard to run a web browser in the cloud. You guys are probably running Chrome on your computers, and that's using a lot of resources, right? So if you want to run a web browser or thousands of web browsers, you can't just spin up a bunch of lambdas. You actually need to use a secure containerized environment.
You have to scale it up and down. It's a stateful system. And that infrastructure is, like, super painful. And I know that firsthand, because at my last company, StreamClub, I was CTO, and I was building our own internal headless browser infrastructure. That's actually why we sold the company, is because Mux really wanted to buy our headless browser infrastructure that we'd built. And it's just a super hard problem. And I actually told my co-founders, I would never start another company unless it was a browser infrastructure company. And it turns out that's really necessary in the age of AI, when AI can actually go out and interact with websites, click on buttons, fill in forms. You need AI to do all of that work in an actual browser running somewhere on a server. And BrowserBase powers that.

swyx [00:03:08]: While you're talking about it, it occurred to me, not that you're going to be acquired or anything, but it occurred to me that it would be really funny if you became the Nikita Beer of headless browser companies. You just have one trick, and you make browser companies that get acquired.

Paul [00:03:23]: I truly do only have one trick. I'm screwed if it's not for headless browsers. I'm not a Go programmer. You know, I'm in AI grant. You know, browsers is an AI grant. But we were the only company in that AI grant batch that used zero dollars on AI spend. You know, we're purely an infrastructure company. So as much as people want to ask me about reinforcement learning, I might not be the best guy to talk about that. But if you want to ask about headless browser infrastructure at scale, I can talk your ear off. So that's really my area of expertise. And it's a pretty niche thing. Like, nobody has done what we're doing at scale before. So we're happy to be the experts.

swyx [00:03:59]: You do have an AI thing, Stagehand. We can talk about the sort of core of Browserbase first, and then maybe Stagehand. Yeah, Stagehand is kind of the web browsing framework. Yeah.

What is Browserbase? Headless Browser Infrastructure Explained

Alessio [00:04:10]: Yeah. Yeah. And maybe how you got to Browserbase and what problems you saw. So one of the first things I worked on as a software engineer was integration testing. Sauce Labs was kind of like the main thing at the time. And then we had Selenium, we had Playwright, we had all these different browser things. But it's always been super hard to do. So obviously you've worked on this before. When you started Browserbase, what were the challenges? What were the AI-specific challenges that you saw versus, there's kind of like all the usual running browser at scale in the cloud, which has been a problem for years. What are like the AI unique things that you saw that like traditional purchase just didn't cover? Yeah.

AI-specific challenges in browser infrastructure

Paul [00:04:46]: First and foremost, I think back to like the first thing I did as a developer, like as a kid when I was writing code, I wanted to write code that did stuff for me. You know, I wanted to write code to automate my life. And I do that probably by using curl or beautiful soup to fetch data from a web browser. And I think I still do that now that I'm in the cloud. And the other thing that I think is a huge challenge for me is that you can't just create a web site and parse that data. And we all know that now like, you know, taking HTML and plugging that into an LLM, you can extract insights, you can summarize.
So it was very clear that now like dynamic web scraping became very possible with the rise of large language models or a lot easier. And that was like a clear reason why there's been more usage of headless browsers, which are necessary because a lot of modern websites don't expose all of their page content via a simple HTTP request. You know, they actually do require you to run this type of code for a specific time. JavaScript on the page to hydrate this. Airbnb is a great example. You go to airbnb.com. A lot of that content on the page isn't there until after they run the initial hydration. So you can't just scrape it with a curl. You need to have some JavaScript run. And a browser is that JavaScript engine that's going to actually run all those requests on the page. So web data retrieval was definitely one driver of starting BrowserBase and the rise of being able to summarize that within LLM. Also, I was familiar with if I wanted to automate a website, I could write one script and that would work for one website. It was very static and deterministic. But the web is non-deterministic. The web is always changing. And until we had LLMs, there was no way to write scripts that you could write once that would run on any website. That would change with the structure of the website. Click the login button. It could mean something different on many different websites. And LLMs allow us to generate code on the fly to actually control that. So I think that rise of writing the generic automation scripts that can work on many different websites, to me, made it clear that browsers are going to be a lot more useful because now you can automate a lot more things without writing. If you wanted to write a script to book a demo call on 100 websites, previously, you had to write 100 scripts. Now you write one script that uses LLMs to generate that script. That's why we built our web browsing framework, StageHand, which does a lot of that work for you. But those two things, web data collection and then enhanced automation of many different websites, it just felt like big drivers for more browser infrastructure that would be required to power these kinds of features.Alessio [00:07:05]: And was multimodality also a big thing?Paul [00:07:08]: Now you can use the LLMs to look, even though the text in the dome might not be as friendly. Maybe my hot take is I was always kind of like, I didn't think vision would be as big of a driver. For UI automation, I felt like, you know, HTML is structured text and large language models are good with structured text. But it's clear that these computer use models are often vision driven, and they've been really pushing things forward. So definitely being multimodal, like rendering the page is required to take a screenshot to give that to a computer use model to take actions on a website. And it's just another win for browser. But I'll be honest, that wasn't what I was thinking early on. I didn't even think that we'd get here so fast with multimodality. I think we're going to have to get back to multimodal and vision models.swyx [00:07:50]: This is one of those things where I forgot to mention in my intro that I'm an investor in Browserbase. And I remember that when you pitched to me, like a lot of the stuff that we have today, we like wasn't on the original conversation. 
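Here is a rough sketch of the kind of script Paul is describing: drive a real browser so the page's JavaScript can hydrate, then hand the rendered text to an LLM for extraction or summarization. It uses plain Playwright running locally; the URL, the fixed wait, and the idea of dumping `innerText` are illustrative choices, not anything Browserbase-specific.

```typescript
import { chromium } from "playwright";

async function fetchRenderedText(url: string): Promise<string> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // A plain curl of this URL returns a mostly empty shell; the listings only
  // appear after client-side JavaScript hydrates the page.
  await page.goto(url, { waitUntil: "networkidle" });

  // Illustrative extra wait for late-arriving content (tune per site).
  await page.waitForTimeout(2_000);

  const text = await page.evaluate(() => document.body.innerText);
  await browser.close();
  return text;
}

// The rendered text can now be chunked and passed to an LLM prompt such as
// "extract every listing name and nightly price from the following page text".
fetchRenderedText("https://www.airbnb.com/").then((text) =>
  console.log(text.slice(0, 500)),
);
```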
But I did have my original thesis was something that we've talked about on the podcast before, which is take the GPT store, the custom GPT store, all the every single checkbox and plugin is effectively a startup. And this was the browser one. I think the main hesitation, I think I actually took a while to get back to you. The main hesitation was that there were others. Like you're not the first headless browser startup. It's not even your first headless browser startup. There's always a question of like, will you be the category winner in a place where there's a bunch of incumbents, to be honest, that are bigger than you? They're just not targeted at the AI space. They don't have the backing of Nat Friedman. And there's a bunch of like, you're here in Silicon Valley. They're not. I don't know.

Paul [00:08:47]: I don't know if that's, that was it, but like, there was a, yeah, I mean, like, I think I tried all the other ones and I was like, really disappointed. Like my background is from working at great developer tools companies, and nothing had like the Vercel like experience. Um, like our biggest competitor actually is partly owned by private equity and they just jacked up their prices quite a bit. And the dashboard hasn't changed in five years. And I actually used them at my last company and tried them and I was like, oh man, like there really just needs to be something that's like the experience of these great infrastructure companies, like Stripe, like Clerk, like Vercel that I use and love, but oriented towards this kind of like more specific category, which is browser infrastructure, which is really technically complex. Like a lot of stuff can go wrong on the internet when you're running a browser. The internet is very vast. There's a lot of different configurations. Like there's still websites that only work with Internet Explorer out there. How do you handle that when you're running your own browser infrastructure? These are the problems that we have to think about and solve at BrowserBase. And it's, it's certainly a labor of love, but I built this for me, first and foremost. I know it's super cheesy and everyone says that for like their startups, but it really, truly was for me. If you look at like the talks I've done even before BrowserBase, and I'm just like really excited to try and build a category defining infrastructure company. And it's, it's rare to have a new category of infrastructure exist. We're here in the Chroma offices and like, you know, vector databases is a new category of infrastructure. Is it, is it, I mean, we can, we're in their office, so, you know, we can, we can debate that one later. That is one.

Multimodality in AI-Powered Browsing
And I think the point solutions tend to win quickly, but then the all-in-ones have a very tight cohesive experience. Yeah. Let's talk about just the hard problems of Browserbase you have on your website, which is beautiful. Thank you. Was there an agency that you used for that? Yeah. Herve.paris.

Paul [00:11:11]: They're amazing. Herve.paris. Yeah. It's H-E-R-V-E. I highly recommend for developer tools founders to work with consumer agencies because they end up building beautiful things and the Parisians know how to build beautiful interfaces. So I got to give props.

swyx [00:11:24]: And chat apps, apparently are, they are very fast. Oh yeah. The Mistral chat. Yeah. Mistral. Yeah.

Paul [00:11:31]: Le Chat.

swyx [00:11:31]: Le Chat. And then your videos as well, it was professionally shot, right? The series A video. Yeah.

Alessio [00:11:36]: Nico did the videos. He's amazing. Not the initial video that you shot at the new one. First one was Austin.

Paul [00:11:41]: Another, another video pretty surprised. But yeah, I mean, like, I think when you think about how you talk about your company. You have to think about the way you present yourself. It's, you know, as a developer, you think you evaluate a company based on like the API reliability and the P95, but a lot of developers say, is the website good? Is the message clear? Do I like trust this founder I'm building my whole feature on? So I've tried to nail that as well as like the reliability of the infrastructure. You're right. It's very hard. And there's a lot of kind of foot guns that you run into when running headless browsers at scale. Right.

Competing with Existing Headless Browser Solutions

swyx [00:12:10]: So let's pick one. You have eight features here. Seamless integration. Scalability. Fast or speed. Secure. Observable. Stealth. That's interesting. Extensible and developer first. What comes to your mind as like the top two, three hardest ones? Yeah.

Running headless browsers at scale

Paul [00:12:26]: I think just running headless browsers at scale is like the hardest one. And maybe can I nerd out for a second? Is that okay? I heard this is a technical audience, so I'll talk to the other nerds. Whoa. They were listening. Yeah. They're upset. They're ready. The AGI is angry. Okay. So. So how do you run a browser in the cloud? Let's start with that, right? So let's say you're using a popular browser automation framework like Puppeteer, Playwright, and Selenium. Maybe you've written a code, some code locally on your computer that opens up Google. It finds the search bar and then types in, you know, search for Latent Space and hits the search button. That script works great locally. You can see the little browser open up. You want to take that to production. You want to run the script in a cloud environment. So when your laptop is closed, your browser is doing something. The browser is doing something. Well, I, we use Amazon. You can see the little browser open up. You know, the first thing I'd reach for is probably like some sort of serverless infrastructure. I would probably try and deploy on a Lambda. But Chrome itself is too big to run on a Lambda. It's over 250 megabytes. So you can't easily start it on a Lambda. So you maybe have to use something like Lambda layers to squeeze it in there. Maybe use a different Chromium build that's lighter. And you get it on the Lambda. Great. It works. But it runs super slowly. It's because Lambdas are very like resource limited. They only run like with one vCPU.
You can run one process at a time. Remember, Chromium is super beefy. It's barely running on my MacBook Air. I'm still downloading it from a pre-run. Yeah, from the test earlier, right? I'm joking. But it's big, you know? So like Lambda, it just won't work really well. Maybe it'll work, but you need something faster. Your users want something faster. Okay. Well, let's put it on a beefier instance. Let's get an EC2 server running. Let's throw Chromium on there. Great. Okay. I can, that works well with one user. But what if I want to run like 10 Chromium instances, one for each of my users? Okay. Well, I might need two EC2 instances. Maybe 10. All of a sudden, you have multiple EC2 instances. This sounds like a problem for Kubernetes and Docker, right? Now, all of a sudden, you're using ECS or EKS, the Kubernetes or container solutions by Amazon. You're spinning up and down containers, and you're spending a whole engineer's time on kind of maintaining this stateful distributed system. Those are some of the worst systems to run because when it's a stateful distributed system, it means that you are bound by the connections to that thing. You have to keep the browser open while someone is working with it, right? That's just a painful architecture to run. And there's all this other little gotchas with Chromium, like Chromium, which is the open source version of Chrome, by the way. You have to install all these fonts. You want emojis working in your browsers because your vision model is looking for the emoji. You need to make sure you have the emoji fonts. You need to make sure you have all the right extensions configured, like, oh, do you want ad blocking? How do you configure that? How do you actually record all these browser sessions? Like it's a headless browser. You can't look at it. So you need to have some sort of observability. Maybe you're recording videos and storing those somewhere. It all kind of adds up to be this just giant monster piece of your project when all you wanted to do was run a lot of browsers in production for this little script to go to google.com and search. And when I see a complex distributed system, I see an opportunity to build a great infrastructure company. And we really abstract that away with Browserbase where our customers can use these existing frameworks, Playwright, Puppeteer, Selenium, or our own Stagehand and connect to our browsers in a serverless-like way. And control them, and then just disconnect when they're done. And they don't have to think about the complex distributed system behind all of that. They just get a browser running anywhere, anytime. Really easy to connect to.

swyx [00:15:55]: I'm sure you have questions. My standard question with anything, so essentially you're a serverless browser company, and there's been other serverless things that I'm familiar with in the past, serverless GPUs, serverless website hosting. That's where I come from with Netlify. One question is just like, you promised to spin up thousands of servers. You promised to spin up thousands of browsers in milliseconds. I feel like there's no real solution that does that yet. And I'm just kind of curious how. The only solution I know, which is to kind of keep a kind of warm pool of servers around, which is expensive, but maybe not so expensive because it's just CPUs. So I'm just like, you know. Yeah.

Browsers as a Core Primitive in AI Infrastructure

Paul [00:16:36]: You nailed it, right?
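For a sense of the connect-and-disconnect model just described, here is a hedged sketch using Playwright's `connectOverCDP` to drive a remotely hosted Chromium instead of a self-managed Lambda or EC2/Kubernetes fleet. The WebSocket endpoint and the Google selector are placeholders and assumptions; a real provider hands you an authenticated URL and usually its own SDK.

```typescript
import { chromium } from "playwright-core";

async function run() {
  // Placeholder endpoint: substitute the authenticated wss:// URL your
  // browser-hosting provider gives you.
  const browser = await chromium.connectOverCDP(
    "wss://browsers.example.com/connect?apiKey=YOUR_KEY",
  );

  // Reuse the default context/page the remote browser already has open.
  const context = browser.contexts()[0] ?? (await browser.newContext());
  const page = context.pages()[0] ?? (await context.newPage());

  await page.goto("https://www.google.com");
  await page.fill("textarea[name=q]", "Latent Space"); // selector is an assumption
  await page.keyboard.press("Enter");
  await page.waitForLoadState("domcontentloaded");

  // Disconnecting is the "serverless-like" part: no Lambda layers, EC2 fleet,
  // or Kubernetes pods left for you to tear down afterwards.
  await browser.close();
}

run().catch(console.error);
```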
I mean, how do you offer a serverless-like experience with something that is clearly not serverless, right? And the answer is, you need to be able to run... We run many browsers on single nodes. We use Kubernetes at browser base. So we have many pods that are being scheduled. We have to predictably schedule them up or down. Yes, thousands of browsers in milliseconds is the best case scenario. If you hit us with 10,000 requests, you may hit a slower cold start, right? So we've done a lot of work on predictive scaling and being able to kind of route stuff to different regions where we have multiple regions of browser base where we have different pools available. You can also pick the region you want to go to based on like lower latency, round trip, time latency. It's very important with these types of things. There's a lot of requests going over the wire. So for us, like having a VM like Firecracker powering everything under the hood allows us to be super nimble and spin things up or down really quickly with strong multi-tenancy. But in the end, this is like the complex infrastructural challenges that we have to kind of deal with at browser base. And we have a lot more stuff on our roadmap to allow customers to have more levers to pull to exchange, do you want really fast browser startup times or do you want really low costs? And if you're willing to be more flexible on that, we may be able to kind of like work better for your use cases.swyx [00:17:44]: Since you used Firecracker, shouldn't Fargate do that for you or did you have to go lower level than that? We had to go lower level than that.Paul [00:17:51]: I find this a lot with Fargate customers, which is alarming for Fargate. We used to be a giant Fargate customer. Actually, the first version of browser base was ECS and Fargate. And unfortunately, it's a great product. I think we were actually the largest Fargate customer in our region for a little while. No, what? Yeah, seriously. And unfortunately, it's a great product, but I think if you're an infrastructure company, you actually have to have a deeper level of control over these primitives. I think it's the same thing is true with databases. We've used other database providers and I think-swyx [00:18:21]: Yeah, serverless Postgres.Paul [00:18:23]: Shocker. When you're an infrastructure company, you're on the hook if any provider has an outage. And I can't tell my customers like, hey, we went down because so-and-so went down. That's not acceptable. So for us, we've really moved to bringing things internally. It's kind of opposite of what we preach. We tell our customers, don't build this in-house, but then we're like, we build a lot of stuff in-house. But I think it just really depends on what is in the critical path. We try and have deep ownership of that.Alessio [00:18:46]: On the distributed location side, how does that work for the web where you might get sort of different content in different locations, but the customer is expecting, you know, if you're in the US, I'm expecting the US version. But if you're spinning up my browser in France, I might get the French version. Yeah.Paul [00:19:02]: Yeah. That's a good question. Well, generally, like on the localization, there is a thing called locale in the browser. You can set like what your locale is. If you're like in the ENUS browser or not, but some things do IP, IP based routing. And in that case, you may want to have a proxy. 
Like let's say you're running something in the, in Europe, but you want to make sure you're showing up from the US. You may want to use one of our proxy features so you can turn on proxies to say like, make sure these connections always come from the United States, which is necessary too, because when you're browsing the web, you're coming from like a, you know, data center IP, and that can make things a lot harder to browse web. So we do have kind of like this proxy super network. Yeah. We have a proxy for you based on where you're going, so you can reliably automate the web. But if you get scheduled in Europe, that doesn't happen as much. We try and schedule you as close to, you know, your origin that you're trying to go to. But generally you have control over the regions you can put your browsers in. So you can specify West one or East one or Europe. We only have one region of Europe right now, actually. Yeah.Alessio [00:19:55]: What's harder, the browser or the proxy? I feel like to me, it feels like actually proxying reliably at scale. It's much harder than spending up browsers at scale. I'm curious. It's all hard.Paul [00:20:06]: It's layers of hard, right? Yeah. I think it's different levels of hard. I think the thing with the proxy infrastructure is that we work with many different web proxy providers and some are better than others. Some have good days, some have bad days. And our customers who've built browser infrastructure on their own, they have to go and deal with sketchy actors. Like first they figure out their own browser infrastructure and then they got to go buy a proxy. And it's like you can pay in Bitcoin and it just kind of feels a little sus, right? It's like you're buying drugs when you're trying to get a proxy online. We have like deep relationships with these counterparties. We're able to audit them and say, is this proxy being sourced ethically? Like it's not running on someone's TV somewhere. Is it free range? Yeah. Free range organic proxies, right? Right. We do a level of diligence. We're SOC 2. So we have to understand what is going on here. But then we're able to make sure that like we route around proxy providers not working. There's proxy providers who will just, the proxy will stop working all of a sudden. And then if you don't have redundant proxying on your own browsers, that's hard down for you or you may get some serious impacts there. With us, like we intelligently know, hey, this proxy is not working. Let's go to this one. And you can kind of build a network of multiple providers to really guarantee the best uptime for our customers. Yeah. So you don't own any proxies? We don't own any proxies. You're right. The team has been saying who wants to like take home a little proxy server, but not yet. We're not there yet. You know?swyx [00:21:25]: It's a very mature market. I don't think you should build that yourself. Like you should just be a super customer of them. Yeah. Scraping, I think, is the main use case for that. I guess. Well, that leads us into CAPTCHAs and also off, but let's talk about CAPTCHAs. You had a little spiel that you wanted to talk about CAPTCHA stuff.Challenges of Scaling Browser InfrastructurePaul [00:21:43]: Oh, yeah. I was just, I think a lot of people ask, if you're thinking about proxies, you're thinking about CAPTCHAs too. I think it's the same thing. You can go buy CAPTCHA solvers online, but it's the same buying experience. It's some sketchy website, you have to integrate it. 
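Circling back to the locale-versus-IP distinction from a moment ago, here is a rough Playwright sketch: `locale`, `timezoneId`, and `geolocation` are browser-level signals set per context, while exiting from a particular country is a separate proxy option. The proxy address and credentials below are placeholders, not any specific provider's values.

```typescript
import { chromium } from "playwright";

async function run() {
  const browser = await chromium.launch({
    // IP-based routing is decided by where the request comes from, so exit
    // through a US proxy (placeholder credentials).
    proxy: {
      server: "http://us.proxy.example.com:8000",
      username: "PROXY_USER",
      password: "PROXY_PASS",
    },
  });

  // Locale, timezone, and geolocation are browser signals, set per context.
  const context = await browser.newContext({
    locale: "en-US",
    timezoneId: "America/New_York",
    geolocation: { latitude: 40.7128, longitude: -74.006 },
    permissions: ["geolocation"],
  });

  const page = await context.newPage();
  await page.goto("https://example.com");
  await browser.close();
}

run().catch(console.error);
```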
It's not fun to buy these things and you can't really trust that the docs are bad. What Browserbase does is we integrate a bunch of different CAPTCHAs. We do some stuff in-house, but generally we just integrate with a bunch of known vendors and continually monitor and maintain these things and say, is this working or not? Can we route around it or not? These are CAPTCHA solvers. CAPTCHA solvers, yeah. Not CAPTCHA providers, CAPTCHA solvers. Yeah, sorry. CAPTCHA solvers. We really try and make sure all of that works for you. I think as a dev, if I'm buying infrastructure, I want it all to work all the time and it's important for us to provide that experience by making sure everything does work and monitoring it on our own. Yeah. Right now, the world of CAPTCHAs is tricky. I think AI agents in particular are very much ahead of the internet infrastructure. CAPTCHAs are designed to block all types of bots, but there are now good bots and bad bots. I think in the future, CAPTCHAs will be able to identify who a good bot is, hopefully via some sort of KYC. For us, we've been very lucky. We have very little to no known abuse of Browserbase because we really look into who we work with. And for certain types of CAPTCHA solving, we only allow them on certain types of plans because we want to make sure that we can know what people are doing, what their use cases are. And that's really allowed us to try and be an arbiter of good bots, which is our long term goal. I want to build great relationships with people like Cloudflare so we can agree, hey, here are these acceptable bots. We'll identify them for you and make sure we flag when they come to your website. This is a good bot, you know?Alessio [00:23:23]: I see. And Cloudflare said they want to do more of this. So they're going to set by default, if they think you're an AI bot, they're going to reject. I'm curious if you think this is something that is going to be at the browser level or I mean, the DNS level with Cloudflare seems more where it should belong. But I'm curious how you think about it.Paul [00:23:40]: I think the web's going to change. You know, I think that the Internet as we have it right now is going to change. And we all need to just accept that the cat is out of the bag. And instead of kind of like wishing the Internet was like it was in the 2000s, we can have free content line that wouldn't be scraped. It's just it's not going to happen. And instead, we should think about like, one, how can we change? How can we change the models of, you know, information being published online so people can adequately commercialize it? But two, how do we rebuild applications that expect that AI agents are going to log in on their behalf? Those are the things that are going to allow us to kind of like identify good and bad bots. And I think the team at Clerk has been doing a really good job with this on the authentication side. I actually think that auth is the biggest thing that will prevent agents from accessing stuff, not captchas. And I think there will be agent auth in the future. I don't know if it's going to happen from an individual company, but actually authentication providers that have a, you know, hidden login as agent feature, which will then you put in your email, you'll get a push notification, say like, hey, your browser-based agent wants to log into your Airbnb. You can approve that and then the agent can proceed. That really circumvents the need for captchas or logging in as you and sharing your password. 
I think agent auth is going to be one way we identify good bots going forward. And I think a lot of this captcha solving stuff is really short-term problems as the internet kind of reorients itself around how it's going to work with agents browsing the web, just like people do. Yeah.Managing Distributed Browser Locations and Proxiesswyx [00:24:59]: Stitch recently was on Hacker News for talking about agent experience, AX, which is a thing that Netlify is also trying to clone and coin and talk about. And we've talked about this on our previous episodes before in a sense that I actually think that's like maybe the only part of the tech stack that needs to be kind of reinvented for agents. Everything else can stay the same, CLIs, APIs, whatever. But auth, yeah, we need agent auth. And it's mostly like short-lived, like it should not, it should be a distinct, identity from the human, but paired. I almost think like in the same way that every social network should have your main profile and then your alt accounts or your Finsta, it's almost like, you know, every, every human token should be paired with the agent token and the agent token can go and do stuff on behalf of the human token, but not be presumed to be the human. Yeah.Paul [00:25:48]: It's like, it's, it's actually very similar to OAuth is what I'm thinking. And, you know, Thread from Stitch is an investor, Colin from Clerk, Octaventures, all investors in browser-based because like, I hope they solve this because they'll make browser-based submission more possible. So we don't have to overcome all these hurdles, but I think it will be an OAuth-like flow where an agent will ask to log in as you, you'll approve the scopes. Like it can book an apartment on Airbnb, but it can't like message anybody. And then, you know, the agent will have some sort of like role-based access control within an application. Yeah. I'm excited for that.swyx [00:26:16]: The tricky part is just, there's one, one layer of delegation here, which is like, you're authoring my user's user or something like that. I don't know if that's tricky or not. Does that make sense? Yeah.Paul [00:26:25]: You know, actually at Twilio, I worked on the login identity and access. Management teams, right? So like I built Twilio's login page.swyx [00:26:31]: You were an intern on that team and then you became the lead in two years? Yeah.Paul [00:26:34]: Yeah. I started as an intern in 2016 and then I was the tech lead of that team. How? That's not normal. I didn't have a life. He's not normal. Look at this guy. I didn't have a girlfriend. I just loved my job. I don't know. I applied to 500 internships for my first job and I got rejected from every single one of them except for Twilio and then eventually Amazon. And they took a shot on me and like, I was getting paid money to write code, which was my dream. Yeah. Yeah. I'm very lucky that like this coding thing worked out because I was going to be doing it regardless. And yeah, I was able to kind of spend a lot of time on a team that was growing at a company that was growing. So it informed a lot of this stuff here. I think these are problems that have been solved with like the SAML protocol with SSO. I think it's a really interesting stuff with like WebAuthn, like these different types of authentication, like schemes that you can use to authenticate people. The tooling is all there. It just needs to be tweaked a little bit to work for agents. And I think the fact that there are companies that are already. 
Providing authentication as a service really sets it up. Well, the thing that's hard is like reinventing the internet for agents. We don't want to rebuild the internet. That's an impossible task. And I think people often say like, well, we'll have this second layer of APIs built for agents. I'm like, we will for the top use cases, but instead of we can just tweak the internet as is, which is on the authentication side, I think we're going to be the dumb ones going forward. Unfortunately, I think AI is going to be able to do a lot of the tasks that we do online, which means that it will be able to go to websites, click buttons on our behalf and log in on our behalf too. So with this kind of like web agent future happening, I think with some small structural changes, like you said, it feels like it could all slot in really nicely with the existing internet.Handling CAPTCHAs and Agent Authenticationswyx [00:28:08]: There's one more thing, which is the, your live view iframe, which lets you take, take control. Yeah. Obviously very key for operator now, but like, was, is there anything interesting technically there or that the people like, well, people always want this.Paul [00:28:21]: It was really hard to build, you know, like, so, okay. Headless browsers, you don't see them, right. They're running. They're running in a cloud somewhere. You can't like look at them. And I just want to really make, it's a weird name. I wish we came up with a better name for this thing, but you can't see them. Right. But customers don't trust AI agents, right. At least the first pass. So what we do with our live view is that, you know, when you use browser base, you can actually embed a live view of the browser running in the cloud for your customer to see it working. And that's what the first reason is the build trust, like, okay, so I have this script. That's going to go automate a website. I can embed it into my web application via an iframe and my customer can watch. I think. And then we added two way communication. So now not only can you watch the browser kind of being operated by AI, if you want to pause and actually click around type within this iframe that's controlling a browser, that's also possible. And this is all thanks to some of the lower level protocol, which is called the Chrome DevTools protocol. It has a API called start screencast, and you can also send mouse clicks and button clicks to a remote browser. And this is all embeddable within iframes. You have a browser within a browser, yo. And then you simulate the screen, the click on the other side. Exactly. And this is really nice often for, like, let's say, a capture that can't be solved. You saw this with Operator, you know, Operator actually uses a different approach. They use VNC. So, you know, you're able to see, like, you're seeing the whole window here. What we're doing is something a little lower level with the Chrome DevTools protocol. It's just PNGs being streamed over the wire. But the same thing is true, right? Like, hey, I'm running a window. Pause. Can you do something in this window? Human. Okay, great. Resume. Like sometimes 2FA tokens. Like if you get that text message, you might need a person to type that in. Web agents need human-in-the-loop type workflows still. You still need a person to interact with the browser. And building a UI to proxy that is kind of hard. You may as well just show them the whole browser and say, hey, can you finish this up for me? And then let the AI proceed on afterwards. 
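To ground the live-view mechanics, here is a hedged sketch of the Chrome DevTools Protocol surface Paul mentions, via Playwright's CDP session API: `Page.startScreencast` streams compressed frames you can forward to an embedding iframe, and `Input.dispatchMouseEvent` replays the human's clicks back into the remote page. How Browserbase wires this together internally isn't public; this only shows that the protocol calls exist.

```typescript
import { chromium } from "playwright";

async function run() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com");

  const cdp = await page.context().newCDPSession(page);

  // Receive base64-encoded frames of the (headless) page and ack each one so
  // Chrome keeps streaming; in practice you'd forward `data` over a WebSocket.
  cdp.on("Page.screencastFrame", async ({ data, sessionId }) => {
    console.log(`frame: ${data.length} base64 chars`);
    await cdp.send("Page.screencastFrameAck", { sessionId });
  });
  await cdp.send("Page.startScreencast", { format: "png", everyNthFrame: 2 });

  // Two-way control: replay a click the human made inside the live-view iframe.
  await cdp.send("Input.dispatchMouseEvent", {
    type: "mousePressed",
    x: 120,
    y: 80,
    button: "left",
    clickCount: 1,
  });
  await cdp.send("Input.dispatchMouseEvent", {
    type: "mouseReleased",
    x: 120,
    y: 80,
    button: "left",
    clickCount: 1,
  });

  await cdp.send("Page.stopScreencast");
  await browser.close();
}

run().catch(console.error);
```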
Is there a future where I stream my current desktop to Browserbase? I don't think so. I think we're very much cloud infrastructure. Yeah. You know, but I think a lot of the stuff we're doing, we do want to, like, build tools. Like, you know, we'll talk about the Stagehand, you know, web agent framework in a second. But, like, there's a case where a lot of people are going desktop first for, you know, consumer use. And I think Claude is doing a lot of this, where I expect to see, you know, MCPs really oriented around the Claude desktop app for a reason, right? Like, I think a lot of these tools are going to run on your computer because it makes... I think it's breaking out. People are putting it on a server. Oh, really? Okay. Well, sweet. We'll see. We'll see that. I was surprised, though, wasn't I? I think that the Browser Company, too, with Dia Browser, it runs on your machine. You know, it's going to be...

swyx [00:30:50]: What is it?

Paul [00:30:51]: So, Dia Browser, as far as I understand... I used to use Arc. Yeah. I haven't used Arc. But I'm a big fan of the Browser Company. I think they're doing a lot of cool stuff in consumer. As far as I understand, it's a browser where you have a sidebar where you can, like, chat with it and it can control the local browser on your machine. So, if you imagine, like, what a consumer web agent is, which it lives alongside your browser, I think Google Chrome has Project Mariner, I think. I almost call it Project Marinara for some reason. I don't know why. It's...

swyx [00:31:17]: No, I think it's someone really likes the Waterworld. Oh, I see. The classic Kevin Costner. Yeah.

Paul [00:31:22]: Okay. Project Marinara is a similar thing to the Dia Browser, in my mind, as far as I understand it. You have a browser that has an AI interface that will take over your mouse and keyboard and control the browser for you. Great for consumer use cases. But if you're building applications that rely on a browser and it's more part of a greater, like, AI app experience, you probably need something that's more like infrastructure, not a consumer app.

swyx [00:31:44]: Just because I have explored a little bit in this area, do people want branching? So, I have the state. Of whatever my browser's in. And then I want, like, 100 clones of this state. Do people do that? Or...

Paul [00:31:56]: People don't do it currently. Yeah. But it's definitely something we're thinking about. I think the idea of forking a browser is really cool. Technically, kind of hard. We're starting to see this in code execution, where people are, like, forking some, like, code execution, like, processes or forking some tool calls or branching tool calls. Haven't seen it at the browser level yet. But it makes sense. Like, if an AI agent is, like, using a website and it's not sure what path it wants to take to crawl this website. To find the information it's looking for. It would make sense for it to explore both paths in parallel. And that'd be a very, like... A road not taken. Yeah. And hopefully find the right answer. And then say, okay, this was actually the right one. And memorize that. And go there in the future. On the roadmap. For sure. Don't make my roadmap, please. You know?

Alessio [00:32:37]: How do you actually do that? Yeah. How do you fork? I feel like the browser is so stateful for so many things.

swyx [00:32:42]: Serialize the state. Restore the state. I don't know.

Paul [00:32:44]: So, it's one of the reasons why we haven't done it yet. It's hard. You know?
Like, to truly fork, it's actually quite difficult. The naive way is to open the same page in a new tab and then, like, hope that it's at the same thing. But if you have a form halfway filled, you may have to, like, take the whole, you know, container. Pause it. All the memory. Duplicate it. Restart it from there. It could be very slow. So, we haven't found a thing. Like, the easy thing to fork is just, like, copy the page object. You know? But I think there needs to be something a little bit more robust there. Yeah.swyx [00:33:12]: So, MorphLabs has this infinite branch thing. Like, wrote a custom fork of Linux or something that let them save the system state and clone it. MorphLabs, hit me up. I'll be a customer. Yeah. That's the only. I think that's the only way to do it. Yeah. Like, unless Chrome has some special API for you. Yeah.Paul [00:33:29]: There's probably something we'll reverse engineer one day. I don't know. Yeah.Alessio [00:33:32]: Let's talk about StageHand, the AI web browsing framework. You have three core components, Observe, Extract, and Act. Pretty clean landing page. What was the idea behind making a framework? Yeah.Stagehand: AI web browsing frameworkPaul [00:33:43]: So, there's three frameworks that are very popular or already exist, right? Puppeteer, Playwright, Selenium. Those are for building hard-coded scripts to control websites. And as soon as I started to play with LLMs plus browsing, I caught myself, you know, code-genning Playwright code to control a website. I would, like, take the DOM. I'd pass it to an LLM. I'd say, can you generate the Playwright code to click the appropriate button here? And it would do that. And I was like, this really should be part of the frameworks themselves. And I became really obsessed with SDKs that take natural language as part of, like, the API input. And that's what StageHand is. StageHand exposes three APIs, and it's a super set of Playwright. So, if you go to a page, you may want to take an action, click on the button, fill in the form, etc. That's what the act command is for. You may want to extract some data. This one takes a natural language, like, extract the winner of the Super Bowl from this page. You can give it a Zod schema, so it returns a structured output. And then maybe you're building an API. You can do an agent loop, and you want to kind of see what actions are possible on this page before taking one. You can do observe. So, you can observe the actions on the page, and it will generate a list of actions. You can guide it, like, give me actions on this page related to buying an item. And you can, like, buy it now, add to cart, view shipping options, and pass that to an LLM, an agent loop, to say, what's the appropriate action given this high-level goal? So, StageHand isn't a web agent. It's a framework for building web agents. And we think that agent loops are actually pretty close to the application layer because every application probably has different goals or different ways it wants to take steps. I don't think I've seen a generic. Maybe you guys are the experts here. I haven't seen, like, a really good AI agent framework here. Everyone kind of has their own special sauce, right? I see a lot of developers building their own agent loops, and they're using tools. And I view StageHand as the browser tool. So, we expose act, extract, observe. Your agent can call these tools. And from that, you don't have to worry about it. You don't have to worry about generating playwright code performantly. 
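Here is a hedged sketch of what act, extract, and observe look like in code, based on the public Stagehand docs. The package name, constructor options, and whether the methods hang off `stagehand` or `stagehand.page` have shifted between releases, so treat the specifics as assumptions.

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

async function run() {
  // env: "LOCAL" runs a local browser; a hosted browser is the other option.
  const stagehand = new Stagehand({ env: "LOCAL" });
  await stagehand.init();
  const page = stagehand.page;

  await page.goto("https://www.espn.com/");

  // act: a natural-language action, compiled to Playwright under the hood.
  await page.act("dismiss any cookie banner");

  // extract: natural-language instruction + Zod schema = structured output.
  const { winner } = await page.extract({
    instruction: "extract the winner of the most recent Super Bowl",
    schema: z.object({ winner: z.string() }),
  });
  console.log(winner);

  // observe: list candidate actions for an agent loop to choose from.
  const actions = await page.observe("actions related to reading an article");
  console.log(actions);

  await stagehand.close();
}

run().catch(console.error);
```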
You don't have to worry about running it. You can kind of just integrate these three tool calls into your agent loop and reliably automate the web.

swyx [00:35:48]: A special shout-out to Anirudh, who I met at your dinner, who I think listens to the pod. Yeah. Hey, Anirudh.

Paul [00:35:54]: Anirudh's the man. He's a StageHand guy.

swyx [00:35:56]: I mean, the interesting thing about each of these APIs is they're kind of each startup. Like, specifically extract, you know, Firecrawl is extract. There's, like, Expand AI. There's a whole bunch of, like, extract companies. They just focus on extract. I'm curious. Like, I feel like you guys are going to collide at some point. Like, right now, it's friendly. Everyone's in a blue ocean. At some point, it's going to be valuable enough that there's some turf battle here. I don't think you have a dog in a fight. I think you can mock extract to use an external service if they're better at it than you. But it's just an observation that, like, in the same way that I see each option, each checkbox in the side of custom GPTs becoming a startup or each box in the Karpathy chart being a startup. Like, this is also becoming a thing. Yeah.

Paul [00:36:41]: I mean, like, so the way StageHand works is that it's MIT-licensed, completely open source. You bring your own API key to your LLM of choice. You could choose your LLM. We don't make any money off of the extract or really. We only really make money if you choose to run it with our browser. You don't have to. You can actually use your own browser, a local browser. You know, StageHand is completely open source for that reason. And, yeah, like, I think if you're building really complex web scraping workflows, I don't know if StageHand is the tool for you. I think it's really more if you're building an AI agent that needs a few general tools or if it's doing a lot of, like, web automation-intensive work. But if you're building a scraping company, StageHand is not your thing. You probably want something that's going to, like, get HTML content, you know, convert that to Markdown, query it. That's not what StageHand does. StageHand is more about reliability. I think we focus a lot on reliability and less so on cost optimization and speed at this point.

swyx [00:37:33]: I actually feel like StageHand, so the way that StageHand works, it's like, you know, page.act, click on the quick start. Yeah. It's kind of the integration test for the code that you would have to write anyway, like the Puppeteer code that you have to write anyway. And when the page structure changes, because it always does, then this is still the test. This is still the test that I would have to write. Yeah. So it's kind of like a testing framework that doesn't need implementation detail.

Paul [00:37:56]: Well, yeah. I mean, Puppeteer, Playwright, and Selenium were all designed as testing frameworks, right? Yeah. And now people are, like, hacking them together to automate the web. I would say, and, like, maybe this is, like, me being too specific. But, like, when I write tests, if the page structure changes. Without me knowing, I want that test to fail. So I don't know if, like, AI, like, regenerating that. Like, people are using StageHand for testing. But it's more for, like, usability testing, not, like, testing of, like, does the front end, like, has it changed or not. Okay. But generally where we've seen people, like, really, like, take off is, like, if they're using, you know, something.
If they want to build a feature in their application that's kind of like Operator or Deep Research, they're using StageHand to kind of power that tool calling in their own agent loop. Okay. Cool.swyx [00:38:37]: So let's go into Operator, the first big agent launch of the year from OpenAI. Seems like they have a whole bunch scheduled. You were on break and your phone blew up. What's your general view of computer use agents, as they're calling it? The overall category before we go into Open Operator, just the overall promise of Operator. I will observe that I tried it once. It was okay. And I never tried it again.OpenAI's Operator and computer use agentsPaul [00:38:58]: That tracks with my experience, too. Like, I'm a huge fan of the OpenAI team. Like, I do not view Operator as a company killer for Browserbase at all. I think it actually shows people what's possible. I think, like, computer use models make a lot of sense. And what I'm actually most excited about with computer use models is, like, their ability to really take screenshots, reason, and output steps. I think that using mouse clicks or mouse coordinates, I've seen that prove to be less reliable than I would like. And I just wonder if that's the right form factor. What we've done with our framework is anchor it to the DOM itself, anchor it to the actual item. So, like, if it's clicking on something, it's clicking on that thing, you know? Like, it's more accurate. No matter where it is. Yeah, exactly. Because it really ties in nicely. And it can handle, like, the whole viewport in one go, whereas, like, Operator can only handle what it sees. Can you hover? Is hovering a thing that you can do? I don't know if we expose it as a tool directly, but I'm sure there's, like, an API for hovering. Like, move mouse to this position. Yeah, yeah, yeah. I think you can trigger hover, like, via, like, the JavaScript on the DOM itself. But, no, I think, like, when we saw computer use, everyone's eyes lit up because they realized, like, wow, like, AI is going to actually automate work for people. And I think seeing that kind of happen from both of the labs, and I'm sure we're going to see more labs launch computer use models, I'm excited to see all the stuff that people build with it. I think that I'd love to see computer use power, like, controlling a browser on Browserbase. And I think, like, Open Operator, which was, like, our open source version of OpenAI's Operator, was our first take on, like, how can we integrate these models into Browserbase? And we handle the infrastructure and let the labs do the models. I don't have a sense that Operator will be released as an API. I don't know. Maybe it will. I'm curious to see how well that works because I think it's going to be really hard for a company like OpenAI to do things like support CAPTCHA solving or, like, have proxies. Like, I think it's hard for them structurally. Imagine this New York Times headline: OpenAI CAPTCHA solving. Like, that would be a pretty bad headline. The New York Times headline Browserbase solves CAPTCHAs? No one cares. No one cares. And, like, our investors are bored. Like, we're all okay with this, you know? We're building this company knowing that the CAPTCHA solving is short-lived until we figure out how to authenticate good bots. I think it's really hard for a company like OpenAI, who has this brand that's so, so good, to balance with, like, the icky parts of web automation, which can be kind of complex to solve.
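The agent-loop pattern Paul sketched earlier — observe candidate actions, let an LLM pick one against a high-level goal, then act on it — reduces to roughly the shape below. The chooseAction helper, its prompt, and the example goal are hypothetical stand-ins, not Open Operator's actual code.

```typescript
// A hedged sketch of an observe -> choose -> act agent loop built on Stagehand.
// chooseAction (and however it calls an LLM) is a hypothetical stand-in.
import { Stagehand } from "@browserbasehq/stagehand";

type ObservedAction = { description?: string; [key: string]: unknown };

// Hypothetical helper: ask your LLM of choice which observed action best
// advances the goal, or null when the goal looks complete.
async function chooseAction(goal: string, actions: ObservedAction[]): Promise<ObservedAction | null> {
  // e.g. send the goal plus JSON.stringify(actions) to an LLM and parse an index back.
  return actions.length > 0 ? actions[0] : null;
}

async function runAgent(goal: string, startUrl: string) {
  const stagehand = new Stagehand({ env: "BROWSERBASE" }); // assumes a Browserbase API key in the environment
  await stagehand.init();
  const page = stagehand.page;
  await page.goto(startUrl);

  for (let step = 0; step < 10; step++) {
    // observe: enumerate candidate actions, guided by the high-level goal.
    const actions = (await page.observe(`actions on this page related to: ${goal}`)) as ObservedAction[];
    const next = await chooseAction(goal, actions);
    if (!next) break; // the LLM thinks we're done, or nothing useful is left

    // act: execute the chosen step as a natural-language instruction.
    await page.act(`take this action: ${next.description ?? JSON.stringify(next)}`);
  }

  await stagehand.close();
}

runAgent("add a plain blue t-shirt to the cart", "https://shop.example.com").catch(console.error);
```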
I'm sure OpenAI knows who to call whenever they need you. Yeah, right. I'm sure they'll have a great partnership.Alessio [00:41:23]: And is Open Operator just, like, a marketing thing for you? Like, how do you think about resource allocation? So, you can spin this up very quickly. And now there's all this, like, open deep research, just open all these things that people are building. We started it, you know. You're the original Open. We're the original open Operator, you know? Is it just, hey, look, this is a demo, but, like, we'll help you build out an actual product for yourself? Like, are you interested in going more of a product route? That's kind of the OpenAI way, right? They started as a model provider and then…Paul [00:41:53]: Yeah, we're not interested in going the product route yet. I view Open Operator as a reference project, you know? Let's show people how to build these things using the infrastructure and models that are out there. And that's what it is. It's, like, Open Operator is very simple. It's an agent loop. It says, like, take a high-level goal, break it down into steps, use tool calling to accomplish those steps. It takes screenshots and feeds those screenshots into an LLM with the step to generate the right action. It uses StageHand under the hood to actually execute this action. It doesn't use a computer use model. And it, like, has a nice interface using the live view that we talked about, the iframe, to embed that into an application. So I felt like people on launch day wanted to figure out how to build their own version of this. And we turned that around really quickly to show them. And I hope we do that with other things like deep research. We don't have a deep research launch yet. I think David from Aomni actually has an amazing open deep research that he launched. It has, like, 10K GitHub stars now. So he's crushing that. But I think if people want to build these features natively into their application, they need good reference projects. And I think Open Operator is a good example of that.swyx [00:42:52]: I don't know. Actually, I'm actually pretty bullish on API-driven Operator. Because that's the only way that you can sort of, like, once it's reliable enough, obviously. And now we're nowhere near. But, like, give it five years. It'll happen, you know. And then you can sort of spin this up and browsers are working in the background and you don't necessarily have to know. And it just is booking restaurants for you, whatever. I can definitely see that future happening. I had this on the landing page here. This might be slightly out of order. But, you know, you have, like, sort of three use cases for Browserbase. Open Operator. Or this is the operator sort of use case. It's kind of like the workflow automation use case. And it competes with UiPath in the sort of RPA category. Would you agree with that? Yeah, I would agree with that. And then there's Agents we talked about already. And web scraping, which I imagine would be the bulk of your workload right now, right?Paul [00:43:40]: No, not at all. I'd say actually, like, the majority is browser automation. We're kind of expensive for web scraping. Like, I think that if you're building a web scraping product, if you need to do occasional web scraping or you have to do web scraping that works every single time, you want to use browser automation. Yeah. You want to use Browserbase. But if you're building web scraping workflows, what you should do is have a waterfall.
The first request should be a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee. ScrapingBee is a great example, right? Yeah. And then, like, if those two don't work, bring out the heavy hitter. Like, Browserbase will 100% work, right? It will load the page in a real browser, hydrate it. I see.swyx [00:44:21]: Because a lot of pages don't render without JS.swyx [00:44:25]: Yeah, exactly.Paul [00:44:26]: So, I mean, the three big use cases, right? Like, you know, automation, web data collection, and then, you know, if you're building anything agentic that needs, like, a browser tool, you want to use Browserbase.Alessio [00:44:35]: Is there any use case that, like, you were super surprised by that people might not even think about? Oh, yeah. Or is it, yeah, anything that you can share? The long tail is crazy. Yeah.Surprising use cases of BrowserbasePaul [00:44:44]: One of the case studies on our website that I think is the most interesting is this company called Benny. So, the way that it works is if you're on food stamps in the United States, you can actually get rebates if you buy certain things. Yeah. You buy some vegetables. You submit your receipt to the government. They'll give you a little rebate back. Say, hey, thanks for buying vegetables. It's good for you. That process of submitting that receipt is very painful. And the way Benny works is you use their app to take a photo of your receipt, and then Benny will go submit that receipt for you and then deposit the money into your account. That's actually using no AI at all. It's all, like, hard-coded scripts. They maintain the scripts. They've been doing a great job. And they build this amazing consumer app. But it's an example of, like, all these, like, tedious workflows that people have to do to kind of go about their business. And they're doing it for the sake of their day-to-day lives. And I had never known about, like, food stamp rebates or the complex forms you have to fill out to get them. But the world is powered by millions and millions of tedious forms, visas. You know, Emirate Lighthouse is a customer, right? You know, they do the O-1 visa. Millions and millions of forms are taking away humans' time. And I hope that Browserbase can help power software that automates away the web forms that we don't need anymore. Yeah.swyx [00:45:49]: I mean, I'm very supportive of that. I mean, forms. I do think, like, government itself is a big part of it. I think the government itself should embrace AI more to do more sort of human-friendly form filling. Mm-hmm. But I'm not optimistic. I'm not holding my breath. Yeah. We'll see. Okay. I think I'm about to zoom out. I have a little brief thing on computer use, and then we can talk about founder stuff, which is, I tend to think of developer tooling markets in impossible triangles, where everyone starts in a niche, and then they start to branch out. So I already hinted at a little bit of this, right? We mentioned Morph. We mentioned E2B. We mentioned Firecrawl. And then there's Browserbase. So there's, like, all this stuff of, like, have a serverless virtual computer that you give to an agent and let them do stuff with it. And there's various ways of connecting it to the internet. You can just connect to a search API, like SerpAPI, whatever other — like, Exa is another one. That's the searching part.
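Sticking for a moment with the scraping waterfall Paul laid out above — plain HTTP first, then a scraping API, then a real browser as the heavy hitter — a minimal sketch might look like this. The scraping-API tier is a hypothetical placeholder rather than any specific provider's interface, and the browser tier uses plain Playwright locally (you could point it at a hosted browser session instead).

```typescript
// A minimal sketch of the waterfall: cheap HTTP fetch, then a scraping API,
// then a real headless browser. The scraping-API endpoint is a placeholder.
import { chromium } from "playwright-core";

// Tier 1: plain HTTP — works for server-rendered pages, costs almost nothing.
async function tryPlainFetch(url: string): Promise<string | null> {
  try {
    const res = await fetch(url);
    return res.ok ? await res.text() : null;
  } catch {
    return null;
  }
}

// Tier 2: a scraping-specific API (hypothetical endpoint and params).
async function tryScrapingApi(url: string): Promise<string | null> {
  const endpoint = `https://scraping-api.example.com/render?url=${encodeURIComponent(url)}`;
  try {
    const res = await fetch(endpoint);
    return res.ok ? await res.text() : null;
  } catch {
    return null;
  }
}

// Tier 3: the heavy hitter — a real browser that loads and hydrates the page.
// Swap chromium.launch() for a remote CDP connection if you use a hosted browser.
async function tryRealBrowser(url: string): Promise<string> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle" }); // let client-side JS render
  const html = await page.content();
  await browser.close();
  return html;
}

export async function getHtml(url: string): Promise<string> {
  return (await tryPlainFetch(url)) ?? (await tryScrapingApi(url)) ?? (await tryRealBrowser(url));
}
```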
You can also have a JSON markdown extractor, which is Firecrawl. Or you can have a virtual browser like Browserbase, or you can have a virtual machine like Morph. And then there's also maybe, like, a virtual sort of code environment, like Code Interpreter. So, like, there's just, like, a bunch of different ways to tackle the problem of give a computer to an agent. And I'm just kind of wondering if you see, like, everyone's just, like, happily coexisting in their respective niches. And as a developer, I just go and pick, like, a shopping basket of one of each. Or do you think that you eventually, people will collide?Future of browser automation and market competitionPaul [00:47:18]: I think that currently it's not a zero-sum market. Like, I think we're talking about... I think we're talking about all of knowledge work that people do that can be automated online. All of these, like, trillions of hours that happen online where people are working. And I think that there's so much software to be built that, like, I tend not to think about how these companies will collide. I just try to solve the problem as best as I can and make this specific piece of infrastructure, which I think is an important primitive, the best I possibly can. And yeah. I think there's players that are actually going to like it. I think there's players that are going to launch, like, over-the-top, you know, platforms, like agent platforms that have all these tools built in, right? Like, who's building the rippling for agent tools that has the search tool, the browser tool, the operating system tool, right? There are some. There are some. There are some, right? And I think in the end, what I have seen as my time as a developer, and I look at all the favorite tools that I have, is that, like, for tools and primitives with sufficient levels of complexity, you need to have a solution that's really bespoke to that primitive, you know? And I am sufficiently convinced that the browser is complex enough to deserve a primitive. Obviously, I have to. I'm the founder of BrowserBase, right? I'm talking my book. But, like, I think maybe I can give you one spicy take against, like, maybe just whole OS running. I think that when I look at computer use when it first came out, I saw that the majority of use cases for computer use were controlling a browser. And do we really need to run an entire operating system just to control a browser? I don't think so. I don't think that's necessary. You know, BrowserBase can run browsers for way cheaper than you can if you're running a full-fledged OS with a GUI, you know, operating system. And I think that's just an advantage of the browser. It is, like, browsers are little OSs, and you can run them very efficiently if you orchestrate it well. And I think that allows us to offer 90% of the, you know, functionality in the platform needed at 10% of the cost of running a full OS. Yeah.Open Operator: Browserbase's Open-Source Alternativeswyx [00:49:16]: I definitely see the logic in that. There's a Mark Andreessen quote. I don't know if you know this one. Where he basically observed that the browser is turning the operating system into a poorly debugged set of device drivers, because most of the apps are moved from the OS to the browser. So you can just run browsers.Paul [00:49:31]: There's a place for OSs, too. Like, I think that there are some applications that only run on Windows operating systems. 
And Eric from pig.dev in this upcoming YC batch, or last YC batch, like, he's building — it'll run tons of Windows operating systems for you to control with your agent. And like, there's some legacy EHR systems that only run on Internet Explorer. Yeah.Paul [00:49:54]: I think that's it. I think, like, there are use cases for specific operating systems for specific legacy software. And like, I'm excited to see what he does with that. I just wanted to give a shout out to the pig.dev website.swyx [00:50:06]: The pigs jump when you click on them. Yeah. That's great.Paul [00:50:08]: Eric, he's the former co-founder of banana.dev, too.swyx [00:50:11]: Oh, that Eric. Yeah. That Eric. Okay. Well, he abandoned bananas for pigs. I hope he doesn't start going around with pigs now.Alessio [00:50:18]: Like he was going around with bananas. A little toy pig. Yeah. Yeah. I love that. What else are we missing? I think we covered a lot of, like, the Browserbase product history, but. What do you wish people asked you? Yeah.Paul [00:50:29]: I wish people asked me more about, like, what will the future of software look like? Because I think that's really where I've spent a lot of time thinking about why we do Browserbase. Like, for me, starting a company is like a means of last resort. Like, you shouldn't start a company unless you absolutely have to. And I remain convinced that the future of software is software that you're going to click a button and it's going to do stuff on your behalf. Right now, with software, you click a button and it maybe, like, calls back an API and, like, computes some numbers. It, like, modifies some text, whatever. But the future of software is software using software. So, I may log into my accounting website for my business, click a button, and it's going to go load up my Gmail, search my emails, find the thing, upload the receipt, and then comment it for me. Right? And it may do it using APIs, maybe a browser. I don't know. I think it's a little bit of both. But that's completely different from how we've built software so far. And that's — I think that future of software has different infrastructure requirements. It's going to require different UIs. It's going to require different pieces of infrastructure. I think the browser infrastructure is one piece that fits into that, along with all the other categories you mentioned. So, I think that it's going to require developers to think differently about how they've built software for, you know...
In this episode of PodRocket, Dev Agrawal, dev advocate and developer, talks about building efficient asynchronous UIs, the challenges and solutions for handling complex state management, utilizing React and Solid frameworks, and the potential of suspense boundaries and transitions in modern web development. Links https://devagr.me https://github.com/devagrawal09 https://www.linkedin.com/in/dev-agrawal-88449b157 https://medium.com/@devagrawal09 https://www.youtube.com/channel/UCDXzM8ijdxkVA6NbQiQCKag https://x.com/devagrawal09 https://events.codemash.org/2025CodeMashConference#/agendaday=4&lang=en&sessionId=76186000004278631&viewMode=2 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Dev Agrawal.
The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of, like, RAG, but then there were, like, issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Qodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one IDE extension with unit tests as the focus. One and a half years later, our IDE extension supports more types of testing and is context-aware. We index local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool: the PR-Agent is the open source one and the commercial one is Qodo Merge. And then we have another open source called Cover-Agent, which is not yet a commercial product — that's coming very soon. It's very impressive. It could be that people are already approving automated pull requests that they aren't even aware of in really big open source projects. So once we have enough of these, we will also launch another agent. So for the first one and a half years, what we did is grow our offering, mostly on the side of, does this code actually work — testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software.
And then like for the first year it was everything bottom-up, getting to 1 million installations. That was 2023; 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with thousands of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that were discussed in the last post that was just released by Qodo. So that's how we call it at Qodo. Just opening the brackets, our company name was CodiumAI, and we renamed to Qodo and we call our models Qodo. So back to my point, so we started the enterprise motion and already have multiple Fortune 100 companies. And then with that, we raised a Series A of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an IDE or something like that.Swyx [00:06:01]: You don't want to fork VS Code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Qodo Gen, Qodo Merge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Qodo Cover. Which is like a commercial version of Cover-Agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with Factory AI, also doing, like, droids. They all have special purposes doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGPT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We've been seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources, and just the same way you manage permissions for users and developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally, like, touched a point that not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bolt.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bolt.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it.
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two type of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a cursor is more like a evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel, Veo and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made 4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to camera out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build it, a deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, stack list investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to ShowMeBolt before the launch. And we, you know, we talked a lot about, like, the framing of, of what we're going to talk about how we marketed the thing, but also, like, what we're So that's what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and CognitionDevIn and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my Bolt launch story, and now I've said all that stuff.swyx: And I just wanted to come back to, like, the WebContainers thing, right? Like, I think you put a lot of weight on the technical moats. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on product. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio backs E2B, which we'll have on at some point, talking about like the sort of the serverful side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully agentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in Bolt, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in WebContainer, you know, there's a lot of kind of stuff you go Google like, you know, turn a Docker container into WASM.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, Docker-to-WASM things will give you an image that's like 60 to 100 megabytes, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: So the task involved, as I understand it, is basically mapping every single Linux call to some kind of WebAssembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know, audio drivers, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes. Yeah. You can just kind of toss them. Or alternatively, what you can do is you can actually use the nice thing.
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagree with the idea that should be document based, which is, you know, Tim Berners Lee, you know, that, and that's kind of what ended up winning, winning was this document based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run. Web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node. js. Full Node. js. Like that capability. Like, I can have a URL that's programmatically controlled. By a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like data of web container online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R and D on the thing. Kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, building app in app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.Eric [00:23:47]: yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. 
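The primitive Eric keeps coming back to — a service worker answering requests so the page effectively runs its own "server" — can be illustrated with a toy sketch like the one below. This is not WebContainer itself, just the underlying browser capability, and the in-memory file map is a made-up stand-in.

```typescript
// sw.ts — a toy illustration of a service worker acting as a tiny in-browser
// "web server": it intercepts fetches and answers them programmatically, the
// capability that makes an in-browser dev server possible.
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

// A made-up in-memory "filesystem"; a real setup would let the page write to
// it (for example via postMessage) as the user edits code.
const files = new Map<string, string>([
  ["/", "<h1>Served from inside the browser tab</h1>"],
  ["/about", "<p>No network, no real server involved.</p>"],
]);

self.addEventListener("fetch", (event: FetchEvent) => {
  const { pathname } = new URL(event.request.url);
  const body = files.get(pathname);
  if (body !== undefined) {
    // Answer the request ourselves, programmatically.
    event.respondWith(new Response(body, { headers: { "Content-Type": "text/html" } }));
  }
  // Anything we don't recognize falls through to the network as usual.
});
```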
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebConnect is because we wrote the entire thing from scratch. It's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, Hey, this is, here's kind of the things that went wrong. There's a fix it button and then a ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts. 
I think us and kind of everyone else has taken their approach of actually breaking out certain tasks, in a certain order, in a concrete way. And the core of Bolt we actually made open source, so you can go and check out the system prompts and so on, and you can run it locally and whatever you like. So anyone that's interested in this stuff, I'd highly recommend taking a look, because there's not a lot that's open source in this realm. That was one of the fun things we thought would be cool to do, and people seem to like it. I mean, there are a lot of forks and people adding different models and stuff, so it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi-LLM thing. And you just told me you're going to merge that in, so then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on the environment, Itamar, you maybe go into the most complicated environments, ones that even the people who work there don't know how to run. How much of an impact does that have on your performance? Is most of the work you're doing actually figuring out the environment and the libraries? Because I'm sure they're using outdated versions of languages, outdated libraries, forks that have never been on the public internet. How much of the work you're doing is there versus at the LLM level?Itamar [00:31:32]: One of the reasons I was asking what the steps are to break things down is because it really matters. What's the tech stack? How complicated is the software? It's hard to figure it out when you're dealing with the real world, any enterprise environment, the city, while, I think, in Bolt you do enable installing stuff, but it's quite a controlled environment. And that's a good thing to do, because then you narrow things down and it's easier to make things work. So definitely, there are two dimensions, actually spaces. One is just installing our software without yet doing anything, just making it work, because we work with enterprises and the Fortune 500, et cetera, and many of them want an on-prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we built a matrix, say 96 options, because there are different dimensions. For example, one dimension: we connect to your code management system, to your Git. So are you on GitHub, GitLab, Subversion? Is it on cloud or deployed on-prem? Just an example. Another: which models do you agree to use, their APIs or ours? Like, we have our own. Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake of a name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion.
So we got...Swyx [00:33:02]: I'm interested in these learnings, like things that you changed your mind on.Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also wondered if it's a problem, because when I think about Bolt, I do think about a couple of companies that are already called that.Swyx [00:33:19]: Curse companies. You could call it Codium just to...Itamar [00:33:24]: Okay, thank you. Touche. Touche.Eric [00:33:27]: Yeah, you've got to imagine the board meeting before we launched Bolt. One of our investors, you can imagine, they're like, are you sure? Because on the investment side there's a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.Itamar [00:33:43]: At this point, we actually have four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated more to code review. And there is a model for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like, dedicated to code?Swyx [00:34:04]: There's code indexing, and then you can do sort of like HyDE for code, and then you can embed the descriptions of the code.Itamar [00:34:12]: Yeah, but you do see a lot of models dedicated to embedding for different spaces, different fields, et cetera, and I'm not aware of one for code. And I know that if you go to Bedrock and try to find one, there are a few embedding models, but none of them are specialized for code.Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?Itamar [00:34:34]: Yeah, it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 options of deployment, so I'm closing the brackets. One dimension is what Git deployment you have; another is what models you agree to use. Another could be whether it's completely air-gapped, or you want a VPC, and then you have Azure, GCP, and AWS, which are all different. Do you use Kubernetes or not? Because we want to exploit that, and there are companies that do not, et cetera. I guess you know what I mean. So that's one thing. And considering that we are dealing with Fortune 500 enterprises, we needed to deal with all of that. So you asked me how complicated it is to solve that complex code, and that's just the deployment part. Then, moving on to the software itself, we see a lot of different challenges. For example, some companies actually did a good job building a lot of microservices. Let's not get into whether that's good or not; let's assume it is a good thing. A lot of microservices, each one with its own repo, and now you have tens of thousands of repos. And you as a developer want to develop something, and I remember coming to a corporate for the first time; I didn't know where to look, where to find things. So just doing good indexing for that is a challenge. And moreover, the regular indexing, the kind you can find (we wrote a few blogs on that, and by the way, we also have some open source, different from yours, actually three projects and growing) just doesn't work. You need to let the tech leads in the companies influence your indexing. For example, mark different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate.
This is a repo we want to grow, etc. And let that be part of your indexing. Only then do things actually work for enterprise, and they don't get to the fatigue of, oh, this is awesome, oh, but now it's starting to annoy me. I think Copilot is an amazing tool, but I'm quoting others here: with GitHub Copilot, they see not-so-good retention in the enterprise. Ooh, spicy. Yeah. I saw snapshots from people, and we have customers that are Copilot users as well, and I also saw research, some of it public by the way, showing between 38 and 50% retention for users of Copilot in the enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. And I think the reason is that, yeah, it helps you autocomplete, especially if you're working on your repo alone, but if you need the context of the remote repos across your codebase, that's hard. So to make things work, there's a lot of work on that, like giving controllability to the tech leads, to the developer platform or developer experience department in the organization, to influence how things are working. A short example: if you have really old legacy code, probably some of it is not so good anymore. If you just fine-tune on that codebase, there is a bias to repeat those mistakes or old practices, et cetera. So you need, as I mentioned, to influence that. For example, in Codium you can have a markdown of best practices from the tech leads, and Codium will include that, relate to it, and not offer suggestions that go against the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software.Eric [00:38:32]: I just want to say what you're doing is extremely impressive, because it's very difficult. I mean, the business of StackBlitz, before Bolt came online, was that we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, we were just doing that with Kubernetes-based stuff, but across the spread of Fortune 500 companies you're working with, how are they doing the inference for this? Are you plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, cloud stuff? Or are they just running stuff on GPUs? How are these folks approaching that? Because, man, what we saw on the enterprise side, I've got to imagine that's a huge challenge.Itamar [00:39:15]: Everything you said and more. For example, someone could be, and I don't think any of these is bad, they made their decision, for example, some people say: I want only AWS, and a VPC on AWS, no matter what. And then some of them, a subset I will say, are willing to take models only from Bedrock and not ours. And we have a problem, because there is no good code embedding model on Bedrock, and that's part of what we're doing now with AWS to solve it. We solve it in a different way. But if you are willing to run on an AWS VPC and run models on GPUs or Inferentia, like the new versions of those coming out, then our models can run on that. But everything you said is right. We see on-prem deployments where they have their own GPUs. We see Azure where you're using Azure OpenAI.
We see cases where you're running on GCP and they want OpenAI, that kind of cross case, although there is Gemini, and even Sonnet, I think, is available on GCP, just as an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated, and it took us a few months to get that matrix I mentioned to start clicking, each one of the blocks there.Eric [00:40:35]: A few months is impressive. I mean, honestly. Every one of these enterprises, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That's extremely impressive. Hats off.Itamar [00:40:50]: It could be, by the way, for example: oh, we're only AWS, but our GitHub Enterprise is on-prem. Oh, we forgot. So we need a private link or whatever, every time something like that. And you do need to think about it if you want to work with an enterprise. And it's important. I understand and respect their point of view.Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. You can't choose some vendors because...Itamar [00:41:15]: Yeah, definitely. To be frank, it makes it hard for a startup, because it means we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our AlphaCodium, which is open source.Eric [00:41:33]: We've got to go over that. Yeah. So I'll do that quickly.Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, okay.Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...Itamar [00:41:43]: Yeah. So just shortly, and then we can double-click on AlphaCodium. AlphaCodium is an open source tool. You can go and try it, and it lets you compete on Codeforces, which is a website and a competition, and actually reach a master level, like the 95th percentile, with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it into smaller blocks, and then the models do a much better job. We all know it by now: take small tasks and solve them. By the way, even o1, which is supposed to be able to do system-two thinking, as Greg from OpenAI hinted, does better on these kinds of problems, but it's still very useful to break things down for o1, despite o1 being able to think by itself. And that's what we presented just a month ago. OpenAI released that they're now doing 93rd percentile with o1-ioi, the International Olympiad in Informatics one. Sorry, I forgot the name. Exactly. I told you I forgot. And we took their o1-preview with AlphaCodium and did better. It just shows, and there is a big difference between the preview and the IOI version, that these models are still not system-two thinkers. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can have that; I thought about it. You know, I care about this philosophy stuff, and I think we haven't seen anything even close to system-two thinking. I can elaborate later. But closing the brackets: we take AlphaCodium as our principle of thinking, we take tasks and break them down into smaller tasks.
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy o1 and Sonnet and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want everything air-gapped or whatever. So that's a challenge: now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down into different blocks, is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, and we can talk about how we do that, then the prompt matters less. What I want to say is that all of this, as a startup trying to do different deployments and get all the juice you can from the models, is a big problem, and one needs to think about it. One of our mitigations is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source, so you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that makes sense. I feel like there's a lot that both of you can exchange notes on about breaking down problems, and I just want you guys to go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in, because for us too with Bolt, we've started thinking about this, since our existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because there's not a lot of prior art, right? This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper, a blog, whatever.Swyx [00:45:22]: The AlphaCodium GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results with o1, we published them.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff, and a lot of it is companies that keep their stuff closed source and then max hype it, but then it's kind of nothing. And I think that gives a bad rep to the incredible stuff that's actually happening here. So stuff like what you're doing, where there's true merit and you're cracking open actual code for others to learn from and use, strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case there is, in my opinion, a relatively good strategy, where a lot of the parts are open source, but then you have the deployment and the environment, which are not, if I get it correctly. And then there's a clear, almost Hugging Face model: yeah, you can do it yourself, but why should you try to deploy it yourself? Deploy it with us. But in our case, and I'm not sure you're not also going to hit some competitors, and I guess you are.
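(A rough TypeScript illustration of the flow-over-prompt idea Itamar describes above: plan small sub-tasks first, then solve each with whichever model is available, so every prompt stays narrow. The Model signature and the naive line-split plan parsing are assumptions made for the sketch, not Codium's actual pipeline.)

```ts
// Stand-in for any model call: o1, Sonnet, Gemini, or a self-hosted model.
type Model = (prompt: string) => Promise<string>;

interface Step {
  name: string;
  prompt: string;
}

// Step 1: ask the model to plan. A real system would parse the plan far more
// defensively than a simple line split.
async function planSteps(task: string, model: Model): Promise<Step[]> {
  const plan = await model(
    `Break the following task into short, independent steps, one per line:\n${task}`
  );
  return plan
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line, i) => ({ name: `step-${i + 1}`, prompt: line.trim() }));
}

// Step 2: solve each small step. Because each prompt is narrow, it tends to
// transfer across models with much less per-model prompt tuning.
export async function runFlow(task: string, model: Model): Promise<string[]> {
  const steps = await planSteps(task, model);
  const results: string[] = [];
  for (const step of steps) {
    results.push(await model(`Solve this sub-task:\n${step.prompt}`));
  }
  return results;
}
```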
I wanted to ask you about some of them, for example. In our case, one day we looked at one of our competitors that is doing code review. We're a platform: we have the code review, the testing, et cetera, spread from the IDE to Git, and for each agent there are a few startups or big incumbents doing only that. So we noticed one of our competitors having not only a very similar UI to our open source, but actually even our typo. And you sit there and you're like, yeah, we're not that good, we don't use enough Grammarly or whatever. And we had a couple of these, and we saw it there. And then it's a challenge. And I want to ask you: Bolt is doing so well, and then you open source it. I think I know what my answer would be, I gave it before, but it's still interesting to hear what you think.Eric [00:47:29]: Geohot said, I don't know what he was up to at that exact moment, but I think it was with comma.ai, where all that stuff's open source, and someone had asked him, why is this open source? And he said, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that; I'm probably butchering it, but I thought it was a really good point. And that's not to say that you should just open source everything, because for obvious reasons there are strategic things you have to keep in mind. But I actually think a pretty liberal approach, as liberal as you can be, can really make a lot of sense. Because it is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just tweak the styles. I mean, clearly, right? I think it's kind of healthy, because I'm sure back at HQ that day when you saw that, you were like, oh, all right, well, we have to grind even harder to make sure we stay ahead. So I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort; I think a lot of companies have a period of comfort where they're not feeling the competition, and one day they get disrupted. So putting stuff out there and letting people push it forces you to face reality sooner, right? And to actually feel that incrementally so you can adjust course. And for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff had landed in the open source versions, and people were like, why can't you ship this? It's in the open, people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great, because the folks in the community did a great job and kept us on our toes. And we've gotten to know most of the folks building these things at this point, so it was actually very instructive: okay, if we're going to go land this, there are some UX patterns we can look at, and the code for this stuff is open source, so we can see what's great about these approaches and what's not. So anyway, net-net, I think it's awesome. From a competitive point of view for us, what's interesting in particular is the core technology of WebContainer. I think that right now there's really nothing on par with that. And we also have a business there, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM.
You have to make CORS bypass requests, connect to databases, and so on, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced the core components of Bolt when we launched was that we think there are going to be a lot more of these in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts and Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm, in the AI labs and startups and everything in between. So I imagine over the coming months there'll be lots of things being announced, with folks adopting it. But yeah, I think effectively...Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI felt that people were not using their models the way they would want, so they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...Swyx [00:51:16]: What's your advice as a founder?Eric [00:51:18]: You're right. And going into it, candidly, we were like, Bolt.new, this thing is super cool, we think people will be stoked. But we were like, maybe that's all it is. Best case scenario, after month one, we'd be mind-blown if we added a couple hundred K of ARR or something. And we thought there was probably going to be an immediate huge business on the other side, because there was some early pull from folks wanting to put WebContainer into their product offerings, similar to what Bolt is doing. We were actually prepared for the inverse outcome here. But I guess we've seen pull on both. What's happened with Bolt, and you're right, is actually the same strategy as OpenAI or Anthropic: ChatGPT is to OpenAI's APIs as Bolt is to WebContainer. So we've taken that same approach, and we're seeing some similar results, except right now the revenue side is extremely lopsided toward Bolt.Itamar [00:52:16]: I think if you ask me what my advice is, you have three options. One is to focus on Bolt. The other is to focus on WebContainer. The third is to raise one billion dollars and do them both. I'm serious. Otherwise, I think you need to choose. And if you raise enough money, and I think it's big bucks, because you're going to be chased by competitors, I think it will be challenging to do both. Maybe you can, I don't know. We do see these numbers right now, raises above $100 million, even without having a product. You can see those.Eric [00:52:49]: It's excellent advice. And I think what's been amazing, but also kind of challenging, is that we're trying to forecast, okay, where are these things going? In the initial weeks, for us and all the investors in the company we were sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now.
Most things, you have this kind of TechCrunch launch spike of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're looking ahead, we're six weeks in, so now we're getting enough confidence in our convictions to go, okay, this se
In this episode we take a deep dive into Chrome DevTools and look at how to make the most of the Performance tab to gain valuable insights into load times and rende…
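(To connect the Performance panel discussion to code, here is a small User Timing sketch in TypeScript; marks and measures like these show up in the panel's Timings track. The renderHero function is just a stand-in for whatever work you want to time.)

```ts
// Placeholder for whatever work you want to time.
function renderHero(): void {
  for (let i = 0; i < 1e6; i++) {
    // simulate work
  }
}

// User Timing marks and measures appear in the Timings track of the
// Performance panel, which makes it easier to correlate your own code with
// the browser's loading and rendering work.
performance.mark("hero-render-start");
renderHero();
performance.mark("hero-render-end");
performance.measure("hero-render", "hero-render-start", "hero-render-end");

// The same data is available programmatically, e.g. for logging:
for (const entry of performance.getEntriesByType("measure")) {
  console.log(`${entry.name}: ${entry.duration.toFixed(1)}ms`);
}
```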
From our Sponsors at SimmerGo to TeamSimmer and use the coupon code DEVIATE for 10% on individual course purchases.The Technical Marketing Handbook provides a comprehensive journey through technical marketing principles.A new course is out now! Chrome DevTools for Digital MarketersLatest content from Juliana & SimoArticle: Cookie Access With Shopify Checkout And SGTM by Simo AhavaArticle: Unlocking Real-Time Insights: How does Piwik PRO's Real-Time Dashboarding Feature work? by Juliana JacksonAlso mentioned in the EpisodeStape's blog: https://stape.io/blogStape website: https://stape.ioMeasure Slack: https://www.measure.chat/Connect with Denis Golubovskyi This podcast is brought to you by Juliana Jackson and Simo Ahava. Intro jingle by Jason Packer and Josh Silverbauer.
Join us in this episode as we delve into new performance features in Chrome DevTools with Umar Hansa. Learn about preloading, debugging techniques, and how to optimize website performance by focusing on key metrics. Links https://umaar.com https://x.com/umaar https://github.com/umaar https://www.youtube.com/c/UmarHansa https://www.linkedin.com/in/umarhansa https://www.tiktok.com/@umarhansaofficial https://dev.to/umaar https://www.debugbear.com/blog/fix-web-performance-devtools We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at [LogRocket.com]. Try LogRocket for free today.(https://logrocket.com/signup/?pdr) Special Guest: Umar Hansa.
From our Sponsors at SimmerGo to TeamSimmer and use the coupon code DEVIATE for 10% on individual course purchases.The Technical Marketing Handbook provides a comprehensive journey through technical marketing principles.A new course is out now! Chrome DevTools for Digital MarketersLatest content from Juliana & SimoArticle: GA4 to Piwik PRO Using Server-side Google Tag Manager by Simo AhavaArticle: Unlocking Real-Time Insights: How does Piwik PRO's Real-Time Dashboarding Feature work? by Juliana JacksonAlso mentioned in the EpisodeKick Point Playbook content consumption tracking recipe from DanaKick Point Playbook Newsletter - The HuddleDana's LinkedIn Learning CoursesGoogle Developers AcademyConnect with Dana DiTomasoDana's LinkedinKick Point Playbook website This podcast is brought to you by Juliana Jackson and Simo Ahava. Intro jingle by Jason Packer and Josh Silverbauer.
React and JavaScript expert Cory House discusses the creation of custom development tools for React applications, sharing insights from his recent talk at React Rally and exploring how the right tools can shape development workflows and enhance automated testing strategies. Links https://www.bitnative.com https://github.com/coryhouse/ama https://x.com/housecor https://github.com/coryhouse https://stackoverflow.com/users/26180/cory-house https://www.linkedin.com/in/coryhouse https://www.pluralsight.com/authors/cory-house https://www.reactjsconsulting.com We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at [LogRocket.com]. Try LogRocket for free today.(https://logrocket.com/signup/?pdr) Special Guest: Cory House.
From our Sponsors at SimmerGo to TeamSimmer and use the coupon code DEVIATE for 10% on individual course purchases.The Technical Marketing Handbook provides a comprehensive journey through technical marketing principles.A new course is out now! Chrome DevTools for Digital MarketersLatest content from Juliana & SimoNew Piwik PRO Templates In Server-side Google Tag Manager by Simo AhavaArticle: Unlocking Real-Time Insights: How does Piwik PRO's Real-Time Dashboarding Feature work? by Juliana JacksonAlso mentioned in the EpisodePiwik PROServer Side Webinar with Simo and Piwik PROGTM ToolsStape.ioGoogle Algo Leak explainedGenerative Engine OptimisationGoogle Satisfaction Score PaperSMX LondonConnect with Michael KingLinkedinhttps://ipullrank.com/ This podcast is brought to you by Juliana Jackson and Simo Ahava. Intro jingle by Jason Packer and Josh Silverbauer.
From our Sponsors at SimmerGo to TeamSimmer and use the coupon code DEVIATE for 10% on individual course purchases.The Technical Marketing Handbook provides a comprehensive journey through technical marketing principles.A new course is out now! Chrome DevTools for Digital MarketersLatest content from Juliana & SimoArticle: AUTOMATIC PAGE VIEW HITS IN SGTM AFTER CONSENT GRANTED by Simo AhavaArticle: Unlocking Real-Time Insights: How does Piwik PRO's Real-Time Dashboarding Feature work? by Juliana JacksonAlso mentioned in the EpisodeConversion Jam 2024 - https://www.conversionjam.com/ John Ekman - https://www.linkedin.com/in/johnekman/ Firefox Devtools doc - https://firefox-source-docs.mozilla.org/devtools-user/Superweek Analytics Summit - https://superweek.hu/Connect with Michael AagaardLinkedinhttps://www.michaelaagaard.com/ This podcast is brought to you by Juliana Jackson and Simo Ahava. Intro jingle by Jason Packer and Josh Silverbauer.
From our Sponsors at SimmerGo to TeamSimmer and use the coupon code DEVIATE for 10% on individual course purchases.The Technical Marketing Handbook is live and provides a comprehensive journey through technical marketing principles.A new course is out now! Chrome DevTools for Digital MarketersLatest content from Juliana & SimoArticle: AUTOMATIC PAGE VIEW HITS IN SGTM AFTER CONSENT GRANTED by Simo AhavaArticle: Unlocking Real-Time Insights: How does Piwik PRO's Real-Time Dashboarding Feature work? by Juliana JacksonAlso mentioned in the EpisodeJuliana's NLP Case Study: MediaMonks - AI Customer Voice Analysis Tool for Starbucks EMEAGA4BigqueryConversion Jam 2024Also Asked Tool: AlsoAskedConnect with Jordan PeckLinkedinSnowplow This podcast is brought to you by Juliana Jackson and Simo Ahava. Intro jingle by Jason Packer and Josh Silverbauer.
Jack Franklin is a Senior Software Engineer at Google. They dive deep into the world of performance optimization. They explore the sophisticated capabilities of Chrome DevTools, focusing on the performance and insights panels. Jack shares invaluable tips on utilizing tools like Lighthouse and the flame chart to prioritize and analyze web performance, along with practical advice for maintaining a clean environment for accurate profiling.Join them as they decode the intricacies of debugging, from handling long tasks and layout thrashing to understanding the context of flame charts and network requests. Plus, they discuss the collaboration efforts between Chrome and Microsoft Edge, valuable educational resources, and even touch on topics like involvement in local politics and upcoming movie releases. Whether you're a seasoned developer or a tech enthusiast, this episode is packed with knowledge, humor, and practical advice to help you master web performance optimization. Tune in now!SocialsLinkedIn: Jack FranklinPicksCharles - Legendary: A Marvel Deck Building Game – SHIELD (2019)Dan - Dan Shappir: How to Maximize Web PerformanceJack - Sky Team | Board GameBecome a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.
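(A small TypeScript companion to the layout-thrashing discussion above: interleaving DOM reads and writes forces repeated synchronous layout, while batching the reads and writes avoids it. The .row selector and the height doubling are arbitrary examples.)

```ts
const items = Array.from(document.querySelectorAll<HTMLElement>(".row"));

// Thrashing: each iteration reads offsetHeight (forces layout) and then
// writes a style (invalidates layout), so layout runs once per item.
function resizeRowsSlow(): void {
  for (const item of items) {
    const height = item.offsetHeight;      // forced synchronous layout
    item.style.height = `${height * 2}px`; // invalidates layout again
  }
}

// Batched: do all the reads first, then all the writes, so layout runs once.
function resizeRowsFast(): void {
  const heights = items.map((item) => item.offsetHeight); // read phase
  items.forEach((item, i) => {
    item.style.height = `${heights[i] * 2}px`;            // write phase
  });
}

resizeRowsFast(); // resizeRowsSlow() would show up as long tasks in a profile
```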
This is a rapid fire episode of news topics today because (as always) there's plenty going on in the front-end development world.Evan You, the creator of the popular Vue.js framework and Vite build tool, is back with a new static site generator named VitePress. VitePress allows users to build fast, content-centric websites with Markdown, a fully customizable theme, and Vue-enhancements for greater interactivity, and it will generate static HTML pages that can be deployed anywhere.There's also two new component library frameworks taking a page from the shadcn/ui open source component library: JollyUI and Ark UI. JollyUI provides shadcn/ui compatible, react aria components that you can copy and paste into your apps. They're accessible, customizable, open source, and look darn good at first glance. Ark UI takes a slightly different approach billing itself as a headless library for building reusable, scalable design systems that work for a wide range of JS frameworks.And the Angular team is back at it again with the twice a year release of a new major version of Angular. We're up to v18 now, and Angular is encouraging users to move away from zone.js for change detection. It's been a staple of Angular for years, but the library came with a number of developer experience and performance downsides and so the Angular team's been hard at work building new APIs that don't rely on zone.js and they're ready for devs to try them out.In bonus news, Google now offers its Gemini AI in Chrome DevTools to help developers better understand the errors and warnings that pop up in the console, Kyle Shevlin shares a very well written design system retrospective based off his own experiences building cross platform design systems for clients and dev teams, and IBM watsonx brings its own Code Assistant AI tool to the table. A unique twist with Code Assistant is that it offers not only code generation, but also code modernization (i.e. refactoring legacy code or translating code from one language to another).News:Paige - VitePress SSG frameworkJack - Ark UI 3.0 and JollyUI react aria compatible componentsTJ - Angular 18 is now availableBonus news:Google Gemini AI in Chrome DevToolsDesign System Retrospective article by Kyle ShevlinIBM watsonx Code AssistantWhat Makes Us Happy this Week:Paige - BUBM double layer cable travel bagJack - Creo Chocolate tourTJ - Sharp Tech podcastThanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or Tweet us on X @front_end_fire.Front-end Fire websiteBlue Collar Coder on YouTubeBlue Collar Coder on DiscordReach out via emailTweet at us on X @front_end_fire
Join us in this enlightening episode of Planet Odoo as we sit down with Samuel Degueldre, a seasoned developer specializing in JavaScript at Odoo. Dive deep into the nuances of JavaScript performance, from memory management to runtime efficiency. Samuel shares his expertise on optimizing client-side frameworks, managing memory leaks, and leveraging Chrome DevTools for peak performance. Whether you're a developer looking to sharpen your skills or just curious about the inner workings of JavaScript, this episode is packed with practical insights and stories from the front lines at Odoo.Whether you're a developer or just starting your journey with Odoo, this episode promises to provide valuable insights into the inner workings of framework optimization.______________________________________________________Don't forget to support us by clicking the subscribe button, leaving a review, and sharing your favorite episode! To further engage with you, we've created a pad (click here) where you can freely share your thoughts, ask questions, suggest episode ideas, or just drop by to say hi (it's always welcome).- See Odoo in action by trying it: https://odoo.com/trial- Listen to our episode about Owl: Owl, the Fastest Javascript FrameworkConcept and realization: Ludvig AuvensRecording and mixing: Lèna Noiset, Judith MorisetHost: Olivier Colson
In this episode Rick complains about his search for a new car with a decent infotainment system, we talk about all kinds of ways to get code out of Figma or even into Figma, Font Awesome starts a web component library called Web Awesome, and CSS is really great in 2024. 0:00 - Intro 01:30 - Ranting about infotainment systems 13:51 - Simulating device properties in Chrome DevTools - https://twitter.com/jh3yy/status/1782176868080210159, Emulating a focused page - https://frontendmasters.com/blog/devtools-tips-tricks/ 16:45 - Font Awesome launches a new project, Web Awesome - https://www.kickstarter.com/projects/fontawesome/web-awesome/description 25:50 - Rick is happy with new CSS features 30:00 - Code Connect for Figma - https://github.com/figma/code-connect 35:15 - Figma styling based on Tailwind classes - https://twitter.com/burcs/status/1782829359050445295 37:45 - Zigbee LED strip that also works with Hue - https://www.aliexpress.com/item/1005005686420822.html 42:08 - Figma typography starter kit - https://www.figma.com/community/file/1362098125068472724/typography-variables-starter-kit 45:18 - Pocket Casts - https://pocketcasts.com/
Michael Hablich is the product lead for Chrome DevTools and Puppeteer. They delve into a comprehensive discussion on various features and uses of the network tab for monitoring API calls, performance debugging with cache, simulating network conditions, and visual understanding of page loading. They cover topics such as debugging, PHP, and the history of dev tools. Michael Hablich shares insights into the development and evolution of Chrome DevTools, highlighting its migration to TypeScript and the team behind it.Tune in to uncover the challenges and advancements in debugging tools, the potential integration of AI, and a range of powerful features within Chrome DevTools.SponsorsChuck's Resume TemplateDeveloper Book ClubBecome a Top 1% Dev with a Top End Devs MembershipSocialsLinkedIn: Michael HablichPicksDan - Killing EveMichael - Spirit IslandsSteve - Victory GripsSupport this podcast at — https://redcircle.com/javascript-jabber/donationsPrivacy & Opt-Out: https://redcircle.com/privacy
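(For context on simulating network conditions as discussed above, here is a TypeScript sketch that sends the Chrome DevTools Protocol command that throttling tools build on, Network.emulateNetworkConditions, through a Puppeteer CDP session; the latency and throughput numbers are arbitrary examples, not DevTools' built-in presets.)

```ts
import puppeteer from "puppeteer";

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Open a raw CDP session for this page and throttle its network.
  const client = await page.target().createCDPSession();
  await client.send("Network.enable");
  await client.send("Network.emulateNetworkConditions", {
    offline: false,
    latency: 400,                         // added round-trip latency in ms
    downloadThroughput: (500 * 1024) / 8, // ~500 kbit/s down
    uploadThroughput: (500 * 1024) / 8,   // ~500 kbit/s up
  });

  await page.goto("https://example.com");
  console.log(await page.title());
  await browser.close();
}

main().catch(console.error);
```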
Dan Shappir takes the lead in explaining all of the acronyms and metrics for measuring the performance of your web applications. He leads a discussion through the ins and outs of monitoring performance and then how to improve and check up on how your website is doing.SponsorsChuck's Resume TemplateDeveloper Book ClubBecome a Top 1% Dev with a Top End Devs MembershipLinks: The Picture element - HTML: Hypertext Markup Language | MDNPicksAJ - The Way of KingsAJ - Taco BellAimee - web.devAimee - @DanShappirDan - New accessibility feature in Chrome Dev Tools: simulate vision deficiencies, including blurred vision & various types of color blindness. In Canary at the bottom of the Rendering tab.Dan - Better Call SaulSupport this podcast at — https://redcircle.com/javascript-jabber/donationsPrivacy & Opt-Out: https://redcircle.com/privacy
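(To make a couple of those acronyms concrete, here is a TypeScript sketch that observes LCP and CLS in the browser with PerformanceObserver; production code would more likely use the web-vitals library, which wraps these same APIs.)

```ts
// Largest Contentful Paint: the last reported entry is the current candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log("LCP candidate:", last.startTime.toFixed(0), "ms");
}).observe({ type: "largest-contentful-paint", buffered: true });

// Cumulative Layout Shift: sum layout-shift values not caused by user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as unknown as { value: number; hadRecentInput: boolean };
    if (!shift.hadRecentInput) cls += shift.value;
  }
  console.log("CLS so far:", cls.toFixed(3));
}).observe({ type: "layout-shift", buffered: true });
```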
In this episode, Simon and Aaron Berezkin discuss the results of the State of React Native 2023 survey. They cover various topics such as state management, data fetching, navigation, or styling and share their own take on the outcome and trends of the different categories based on real-life observations.Learn React Native - https://galaxies.devÁron BerezkinAron Twitter: https://twitter.com/AronBerezkinAron Blog: https://www.aronberezkin.com/Aron Github: https://github.com/AronBeTakeawaysReact Native developers are generally happy with the state of the framework and its various features.State management libraries like Redux and Zustand are widely used, but React Query is gaining popularity.React Native Paper and React Native Elements are still popular UI component libraries, but custom solutions are becoming more common.React Native Reanimated and the Animated API are the most popular choices for graphics and animations.The most commonly used debugging tools are console logs, Flipper, and Chrome DevTools.The new architecture and bridge-less mode are highly interesting to React Native developers.Expo modules are making it easier to create and use native libraries in React Native projects.React Native is moving in the right direction, with most developers agreeing.Building React Native apps is considered complex but not overly so.The React Native community is highly valued and supportive.Pain points include debugging, un-maintained packages, dealing with native code, and upgrades.Missing features include better debugging and Android shadows.The React Native ecosystem is stable but not boring, with ongoing improvements and innovations.
Episode 190 contains the Digital Marketing News and Updates from the week of Dec 4-8, 2023.1. Google Ads Responds: Temporary Opt-Out for Search Partner Network in PMax Campaigns - Last week in Ep 189, I highlighted the study by Adalytics that revealed Google search ads were appearing on websites violating Google's own publisher policies. This revelation raised significant concerns among advertisers about the placement of their ads. It seems Google has taken note of these concerns, as they have now introduced a temporary opt-out feature for advertisers in Performance Max (PMax) and Universal App campaigns, allowing them to avoid placements in the Search Partner Network (SPN). This move, unprecedented in the realm of automated campaigns, marks a significant shift in Google's advertising approach.Key Points from the Update: Temporary Opt-Out for SPN: Advertisers can now temporarily opt out of placing their ads on SPN for all campaign types, including PMax and App campaigns. This change, effective until March 1st, is a direct response to the concerns raised by the Adalytics report. Google's Response to the Report: While Google has contested the methodology and conclusions of the Adalytics report, they acknowledge the importance of improving their products to meet partner needs. Opt-Out Process: The opt-out feature is currently gated and requires a Google representative to enable it for advertisers. This process is being developed for wider accessibility. For small business owners using Google PMax Ads, this update is crucial. It offers more control over ad placements, ensuring that ads are displayed in contexts that align with brand values and reach the intended audience. The ability to opt out of SPN in PMax campaigns allows for more targeted advertising and potentially better ROI. Google's introduction of a temporary opt-out feature for SPN in PMax campaigns is a significant response to concerns about ad placements. This development serves as a crucial reminder for business owners: always check your ad placements and don't blindly trust any ad platform. Regularly verifying where your ads are displayed is essential. Failing to do so could mean unintentionally leaving money on the table or giving more to Google than necessary. Trust but verify – it's a key principle in digital advertising to ensure your investments are yielding the best possible returns.2. Google's 3 Essential Tips for Technical Troubleshooting - Google has shared insights for diagnosing and resolving technical SEO issues that can affect a website's indexing and ranking. This guidance is particularly crucial for business owners who may not have an in-depth understanding of SEO but want to ensure their website performs optimally in search results.Google's Three Key Tips for Technical SEO Troubleshooting:Check if the Page is Indexed or Indexable: Use Google Search Console's URL Inspection Tool to determine if a page is indexed and indexable. This tool provides information on whether a page is indexed and, if not, offers suggestions for why Google might be having trouble indexing it. It also shows the last crawl date, indicating Google's interest in the page.Identify Duplicate Content and Canonical Issues: Google advises checking if a page is being ignored as a duplicate and if another page is selected as the canonical URL. While it might not be the canonical URL you expected, if the content is indexed, it can still appear in search results. 
This is generally acceptable and not a cause for concern.Review Rendered HTML for Code-Related Issues: Inspecting the rendered HTML, which shows the HTML after all JavaScript has been executed, is crucial. This can reveal issues related to JavaScript or other technical aspects that might not be visible in the source code HTML. Google suggests using Search Console and Chrome DevTools to view the rendered HTML.Bonus Tip: Google cautioned against using the cache or site:search operator for any kind of diagnostic purposes. For example, a page can be indexed but not show up in a site:search. The site search operator, just like all the other site operators, is completely disconnected from the search index. This has always been the case, even when there was a site search operator for showing backlinks.Understanding and addressing technical SEO issues is vital for ensuring your website is properly indexed and ranked by search engines. By following Google's tips, you can identify and fix problems that might be hindering your website's performance. This can lead to better visibility in search results, potentially driving more traffic and customers to your business. Regularly checking for indexing issues, understanding duplicate content and canonicalization, and reviewing rendered HTML are key steps in maintaining a healthy and search-friendly website.3. Google Ads Gambling and Games Advertising Policy Update - Google Ads is updating its policy regarding gambling and games advertising, effective from December 12. This change is particularly relevant for businesses in the sports betting and lottery sectors, as it opens new advertising opportunities in specific regions.Key Changes in Google Ads Policy: Starting December 12, Google will begin accepting and running ads from state-approved entities for sports betting in Vermont and lottery couriers in Arkansas, Montana, New Jersey, and New York. This marks a significant expansion of the types of gambling-related ads that can be run through Google Ads. To be eligible to promote these ads, advertisers must apply for certification. This process involves filling out an online gambling application form in the Google Ads Help Center, providing all requested information to prevent delays. Different forms are available for privately-licensed operators, state-run entities, and social casino game operators. 4. Google's Cryptocurrency Advertising Policy Update - Google has announced significant changes to its cryptocurrency advertising policy, set to take effect on January 29, 2024. This update is particularly relevant for businesses involved in advertising cryptocurrency coin trusts, a type of financial product that allows investors to trade shares in trusts holding large amounts of digital currency. These trusts offer equity in cryptocurrencies without direct ownership and can diversify investment portfolios.Key Aspects of the Policy Update: Scope and Requirements: Advertisers targeting the United States will be able to promote these products and services, provided they comply with specific policies and obtain certification from Google. Global Application: The updated policy is not limited to the United States but will apply globally to all accounts advertising cryptocurrency coin trusts. Compliance with Local Laws: Advertisers must comply with local laws in the areas where the ads are targeted. This is a crucial aspect of the policy, emphasizing the need for advertisers to be aware of and adhere to regional regulations. 
Warning Before Suspension: In case of policy violations, Google will first issue a warning, giving advertisers at least seven days to rectify non-compliance issues before imposing an account suspension. This approach provides an opportunity for advertisers to address any issues and align with the revised guidelines. 5. Google's Criteria for Video Content in Search Results - Google has updated its requirements for video search results, particularly affecting the "Video Mode" feature in Google Search. This significant change mandates that videos must be the main content of a webpage to be featured in search results and Video Mode.Video Mode is a feature in Google Search that displays under the video tab. It specifically showcases videos that are the central element of their respective web pages. This mode is designed to provide users with direct access to video content, ensuring that the videos they find are the primary focus of the pages they visit.Key Aspects of the Update: Stricter Video Requirements: Google now requires that for a video to show up in search results and Video Mode, it must be the primary content of the page. This means pages where videos are supplementary to text or other content will not be featured in these search results. Impact on Video Mode: With this update, Video Mode will only show videos that are the main focus of their respective pages, enhancing the user experience by directing them to pages where the video is the central element. Examples of Non-Primary Video Content: Google provided examples of pages where videos are not the primary focus, such as blog posts with complementary videos, product detail pages with additional videos, and video category pages listing multiple videos of equal prominence. 6. Google Analytics 4 Audiences Now Integrated with Google Ads - Google has made a significant update to Google Analytics 4 (GA4), integrating GA4 audiences directly with Google Ads. This update, announced on December 6, 2023, allows advertisers to create and apply GA4 audiences, including predictive audiences, within Google Ads. This integration streamlines the process of audience creation and application, enhancing the effectiveness of advertising campaigns.Key Features of the Update: Direct Integration with Google Ads: Advertisers can now build analytics audiences, including predictive audiences, using the Audience Manager feature in Google Ads. This integration simplifies the audience creation process and makes it more efficient. Predictive Audiences: A predictive audience is defined by at least one condition related to a predictive metric. For example, an audience named "likely 7-day purchasers" consists of users predicted to purchase in the next 7 days. This feature allows for more targeted and effective advertising. 7. Threads Expands Keyword Search and Content Ranking Insights - Threads, the X (formerly Twitter) alternative social media platform, has recently made two significant updates. On November 30, 2023, Threads expanded its keyword search functionality to all regions, enhancing users' ability to find relevant content. Additionally, Instagram Chief Adam Mosseri provided insights into Threads' content ranking approach, emphasizing a mindful strategy to avoid the pitfalls of spammy tactics and divisive content.Key Updates from Threads: Expanded Keyword Search: Initially tested in selected regions, Threads' keyword search is now available globally. 
This feature allows users to search for posts using specific terms, including hashtagged and topic-tagged content. The expansion supports all languages available in the app, with more search enhancements promised soon. Mindful Approach to Tags and Search: Threads is cautious about adding tags and search features due to their potential to amplify spam and irrelevant content. To mitigate this, users can only add one topic tag per post, and hashtag use is not straightforwardly enabled. This approach aligns with Meta's vision for Threads as a more positive public conversation app. Content Ranking Insights from Adam Mosseri: Mosseri explained that Threads aims to avoid the negatives of engagement-focused platforms, which often amplify divisive content. Threads' content ranking strategy is designed to foster healthier, more engaging conversations without over-promising reach and engagement to news outlets.
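(A small companion to the rendered-HTML tip in item 2 of the roundup above: one way to capture the post-JavaScript DOM outside of DevTools, sketched with Puppeteer in TypeScript. The URL and the networkidle0 wait are example choices, not a Google-recommended procedure.)

```ts
import puppeteer from "puppeteer";

// Load the page headlessly and dump the serialized DOM after scripts run.
async function dumpRenderedHtml(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const html = await page.content(); // rendered HTML, post-JavaScript
  await browser.close();
  return html;
}

dumpRenderedHtml("https://example.com").then((html) =>
  console.log(html.slice(0, 500))
);
```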
ℹ️ Eric Warsow at OMT ℹ️ OMT Webinars ℹ️ OMT Conference ℹ️ Agency Day 2022 - https://omt-magazin.podigee.io/9196-neue-episode
Richard talks with Jack Franklin, a programmer working on the Chrome Dev Tools team at Google.
Guest Geoff Harcourt, CTO of CommonLit, joins Joël to talk about a thing that comes up with a lot with clients: the performance of their test suite. It's often a concern because with test suites, until it becomes a problem, people tend to not treat it very well, and people ask for help on making their test suites faster. Geoff shares how he handles a scenario like this at CommonLit. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Geoff Harcourt (https://twitter.com/geoffharcourt) Common Lit (https://www.commonlit.org/) Cuprite driver (https://cuprite.rubycdp.com/) Chrome DevTools Protocol (CDP) (https://chromedevtools.github.io/devtools-protocol/) Factory Doctor (https://test-prof.evilmartians.io/#/profilers/factory_doctor) Joël's RailsConf talk (https://www.youtube.com/watch?v=LOlG4kqfwcg) Formal Methods (https://www.hillelwayne.com/post/formally-modeling-migrations/) Rails multi-database support (https://guides.rubyonrails.org/active_record_multiple_databases.html) Knapsack pro (https://knapsackpro.com/) Prior episode with Eebs (https://www.bikeshed.fm/353) Shopify article on skipping specs (https://shopify.engineering/spark-joy-by-running-fewer-tests) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Geoff Harcourt, who is the CTO of CommonLit. GEOFF: Hi, Joël. JOËL: And together, we're here to share a little bit of what we've learned along the way. Geoff, can you briefly tell us what is CommonLit? What do you do? GEOFF: CommonLit is a 501(c)(3) non-profit that delivers a literacy curriculum in English and Spanish to millions of students around the world. Most of our tools are free. So we take a lot of pride in delivering great tools to teachers and students who need them the most. JOËL: And what does your role as CTO look like there? GEOFF: So we have a small engineering team. There are nine of us, and we run a Rails monolith. I'd say a fair amount of the time; I'm hands down in the code. But I also do the things that an engineering head has to do, so working with vendors, and figuring out infrastructure, and hiring, and things like that. JOËL: So that's quite a variety of things that you have to do. What is new in your world? What's something that you've encountered recently that's been fun or interesting? GEOFF: It's the start of the school year in America, so traffic has gone from a very tiny amount over the summer to almost the highest load that we'll encounter all year. So we're at a new hosting provider this fall. So we're watching our infrastructure and keeping an eye on it. The analogy that we've been using to describe this is like when you set up a bunch of plumbing, it looks like it all works, but until you really pump water through it, you don't see if there are any leaks. So things are in good shape right now, but it's a very exciting time of year for us. JOËL: Have you ever done some actual plumbing yourself? GEOFF: I am very, very bad at home repair. But I have fixed a toilet or two. I've installed a water filter but nothing else. What about you? JOËL: I've done a little bit of it when I was younger with my dad. Like, I actually welded copper pipes and that kind of thing. GEOFF: Oh, that's amazing. That's cool. Nice. 
JOËL: So I've definitely felt that thing where you turn the water source back on, and it's like, huh, let's see, is this joint going to leak, or are we good? GEOFF: Yeah, they don't have CI for plumbing, right? JOËL: [laughs] You know, test it in production, right? GEOFF: Yeah. [laughs] So we're really watching right now traffic starting to rise as students and teachers are coming back. And we're also figuring out all kinds of things that we want to do to do better monitoring of our application, so some of this is watching metrics to see if things happen. But some of this is also doing some simulated user activity after we do deploys. So we're using some automated browsers with Cypress to log into our application and do some user flows, and then report back on the results. JOËL: So is this kind of like a feature test in CI, except that you're running it in production? GEOFF: Yeah. Smoke test is the word that we've settled on for it, but we run it against our production server every time we deploy. And it's a small suite. It's nowhere as big as our big Capybara suite that we run in CI, but we're trying to get feedback in less than six minutes. That's sort of the goal. In addition to running tests, we also take screenshots with a tool called Percy, and that's a visual regression testing tool. So we get to see the screenshots, and if they differ by more than one pixel, we get a ping that lets us know that maybe our CSS has moved around or something like that. JOËL: Has that caught some visual bugs for you? GEOFF: Definitely. The state of CSS at CommonLit was very messy when I arrived, and it's gotten better, but it still definitely needs some love. There are some false positives, but it's been really, really nice to be able to see visual changes on our production pages and then be able to approve them or know that there's something we have to go back and fix. JOËL: I'm curious, for this smoke test suite, how long does it take to run? GEOFF: We run it in parallel. It runs on Buildkite, which is the same tool that we use to orchestrate our CI, and the longest test takes about five minutes. It signs in as a teacher, creates an account. It creates a class; it invites the student to that class. It then logs out, logs in as that student creates the student account, signs in as the student, joins the class. It then assigns a lesson to the student then the student goes and takes the lesson. And then, when the student submits the lesson, then the test is over. And that confirms all of the most critical flows that we would want someone to drop what they were doing if it's broken, you know, account creation, class creation, lesson creation, and students taking a lesson. JOËL: So you're compressing the first few weeks of school into five minutes. GEOFF: Yes. And I pity the school that has thousands of fake teachers, all named Aaron McCarronson at the school. JOËL: [laughs] GEOFF: But we go through and delete that data every once in a while. But we have a marketer who just started at CommonLit maybe a few weeks ago, and she thought that someone was spamming our signup form because she said, "I see hundreds of teachers named Aaron McCarronson in our user list." JOËL: You had to admit that you were the spammer? GEOFF: Yes, I did. [laughs] We now have some controls to filter those people out of reports. But it's always funny when you look at the list, and you see all these fake people there. JOËL: Do you have any rate limiting on your site? GEOFF: Yeah, we do quite a bit of it, actually. 
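As a rough sketch of the post-deploy smoke-test idea Geoff describes: their real suite is written in Cypress, but a Capybara-flavored analogue of the same approach would point the driver at the live host instead of booting the app locally and run only a tagged handful of critical flows. Everything here (the SMOKE_HOST variable, the :smoke tag, the form labels) is hypothetical.

```ruby
# Sketch only: a Capybara analogue of the Cypress smoke suite described above.
require "capybara/rspec"

RSpec.configure do |config|
  config.before(:each, :smoke) do
    Capybara.run_server = false                   # don't start a local app server
    Capybara.app_host   = ENV.fetch("SMOKE_HOST") # e.g. the production URL
  end
end

RSpec.describe "post-deploy smoke check", :smoke, type: :feature do
  it "lets a teacher sign in and reach the dashboard" do
    visit "/sign_in"
    fill_in "Email",    with: ENV.fetch("SMOKE_TEACHER_EMAIL")
    fill_in "Password", with: ENV.fetch("SMOKE_TEACHER_PASSWORD")
    click_button "Sign in"
    expect(page).to have_content("Dashboard")
  end
end
```

Such a suite would be kicked off from the deploy pipeline, for example with `SMOKE_HOST=https://example.com bundle exec rspec --tag smoke`.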
Some of it we do through Cloudflare. We have tools that limit a certain flow, like people trying to credential stuffing our password, our user sign-in forms. But we also do some further stuff to prevent people from hitting key endpoints. We use Rack::Attack, which is a really nice framework. Have you had to do that in client work with clients setting that stuff up? JOËL: I've used Rack:Attack before. GEOFF: Yeah, it's got a reasonably nice interface that you can work with. And I always worry about accidentally setting those things up to be too sensitive, and then you get lots of stuff back. One issue that we sometimes find is that lots of kids at the same school are sharing an IP address. So that's not the thing that we want to use for rate limiting. We want to use some other criteria for rate limiting. JOËL: Right, right. Do you ever find that you rate limit your smoke tests? Or have you had to bypass the rate limiting in the smoke tests? GEOFF: Our smoke tests bypass our rate limiting and our bot detection. So they've got some fingerprints they use to bypass that. JOËL: That must have been an interesting day at the office. GEOFF: Yes. [laughter] With all of these things, I think it's a big challenge to figure out, and it's similar when you're making tests for development, how to make tests that are high signal. So if a test is failing really frequently, even if it's testing something that's worthwhile, if people start ignoring it, then it stops having value as a piece of signal. So we've invested a ton of time in making our test suite as reliable as possible, but you sometimes do have these things that just require a change. I've become a really big fan of...there's a Ruby driver for Capybara called Cuprite, and it doesn't control chrome with Chrome Driver or with Selenium. It controls it with the Chrome DevTools protocol, so it's like a direct connection into the browser. And we find that it's very, very fast and very, very reliable. So we saw that our Capybara specs got significantly more reliable when we started using this as our driver. JOËL: Is this because it's not actually moving the mouse around and clicking but instead issuing commands in the background? GEOFF: Yeah. My understanding of this is a little bit hazy. But I think that Selenium and ChromeDriver are communicating over a network pipe, and sometimes that network pipe is a little bit lossy. And so it results in asynchronous commands where maybe you don't get the feedback back after something happens. And CDP is what Chrome's team and I think what Puppeteer uses to control things directly. So it's great. And you can even do things with it. Like, you can simulate different time zone for a user almost natively. You can speed up or slow down the traveling of time and the direction of time in the browser and all kinds of things like that. You can flip it into mobile mode so that the device reports that it's a touch browser, even though it's not. We have a set of mobile specs where we flip it with CDP into mobile mode, and that's been really good too. Do you find when you're doing client work that you have a demand to build mobile-specific specs for system tests? JOËL: Generally not, no. GEOFF: You've managed to escape it. JOËL: For something that's specific to mobile, maybe one or two tests that have a weird interaction that we know is different on mobile. But in general, we're not doing the whole suite under mobile and the whole suite under desktop. 
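A minimal sketch of the Rack::Attack throttling Geoff mentions, keyed on the submitted account identifier rather than the request IP, since many students at one school can share a single IP. The route path and parameter name are hypothetical; the throttle DSL is Rack::Attack's standard API.

```ruby
# config/initializers/rack_attack.rb — a minimal sketch, assuming a Rails app.
class Rack::Attack
  # Throttle sign-in attempts per submitted email rather than per IP.
  throttle("logins/email", limit: 5, period: 60) do |req|
    if req.path == "/session" && req.post?
      # The block's return value is the throttle discriminator;
      # returning nil means "don't count this request".
      req.params["email"].to_s.downcase.presence
    end
  end
end
```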
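And a minimal sketch of wiring Capybara to the Cuprite driver discussed above, which drives Chrome over the Chrome DevTools Protocol rather than through Selenium/ChromeDriver. The options shown are illustrative defaults, not CommonLit's configuration.

```ruby
# Gemfile: gem "cuprite"
require "capybara/cuprite"

Capybara.register_driver(:cuprite) do |app|
  Capybara::Cuprite::Driver.new(
    app,
    window_size: [1280, 800],
    js_errors: true,  # raise when the page logs a JavaScript error
    timeout: 10
  )
end

# Use Cuprite for all JavaScript-tagged feature specs.
Capybara.javascript_driver = :cuprite
```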
GEOFF: When you hand off a project...it's been a while since you and I have worked together. JOËL: For those who don't know, Geoff used to be with us at thoughtbot. We were colleagues. GEOFF: Yeah, for a while. I remember my very first thoughtbot Summer Summit; you gave a really cool lightning talk about Eleanor of Aquitaine. JOËL: [laughs] GEOFF: That was great. So when you're handing a project off to a client after your ending, do you find that there's a transition period where you're educating them about the norms of the test suite before you leave it in their hands? JOËL: It depends a lot on the client. With many clients, we're working alongside an existing dev team. And so it's not so much one big handoff at the end as it is just building that in the day-to-day, making sure that we are integrating with the team from the outset of the engagement. So one thing that does come up a lot with clients is the performance of their test suite. That's often a concern because the test suite until it becomes a problem, people tend to not treat it very well. And by the time that you're bringing on an external consultant to help, generally, that's one of the areas of the code that's been a little bit neglected. And so people ask for help on making their test suite faster. Is that something that you've had to deal with at CommonLit as well? GEOFF: Yeah, that's a great question. We have struggled a lot with the speed that our test suite...the time it takes for our test suite to run. We've done a few things to improve it. The first is that we have quite a bit of caching that we do in our CI suite around dependencies. So gems get cached separately from NPM packages and browser assets. So all three of those things are independently cached. And then, we run our suites in parallel. Our Jest specs get split up into eight containers. Our Ruby non-system tests...I'd like to say unit tests, but we all know that some of those are actually integration tests. JOËL: [laughs] GEOFF: But those tests run in 15 containers, and they start the moment gems are built. So they don't wait for NPM packages. They don't wait for assets. They immediately start going. And then our system specs as soon as the assets are built kick off and start running. And we actually run that in 40 parallel containers so we can get everything finished. So our CI suite can finish...if there are no dependency bumps and no asset bumps, our specs suite you can finish in just under five minutes. But if you add up all of that time, cumulatively, it's something like 75 minutes is the total execution as it goes. Have you tried FactoryDoctor before for speeding up test suites? JOËL: This is the gem from Evil Martians? GEOFF: Yeah, it's part of TestProf, which is their really, really unbelievable toolkit for improving specs, and they have a whole bunch of things. But one of them will tell you how many invocations of FactoryBot factories each factory got. So you can see a user factory was fired 13,000 times in the test suite. It can even do some tagging where it can go in and add metadata to your specs to show which ones might be candidates for optimization. JOËL: I gave a talk at RailsConf this year titled Your Tests Are Making Too Many Database Calls. GEOFF: Nice. JOËL: And one of the things I talked about was creating a lot more data via factories than you think that you are. And I should give a shout-out to FactoryProf for finding those. 
GEOFF: Yeah, it's kind of a silent killer with the test suite, and you really don't think that you're doing a whole lot with it, and then you see how many associations. How do you fight that tension between creating enough data that things are realistic versus the streamlining of not creating extraneous things or having maybe mystery guests via associations and things like that? JOËL: I try to have my base factories be as minimal as possible. So if there's a line in there that I can remove, and the factory or the model still saves, then it should be removed. Some associations, you can't do that if there's a foreign key constraint, and so then I'll leave it in. But I am a very hardcore minimalist, at least with the base factory. GEOFF: I think that makes a lot of sense. We use foreign keys all over the place because we're always worried about somehow inserting student data that we can't recover with a bug. So we'd rather blow up than think we recorded it. And as a result, sometimes setting up specs for things like a student answering a multiple choice question on a quiz ends up being this sort of if you give a mouse a cookie thing where it's you need the answer options. You need the question. You need the quiz. You need the activity. You need the roster, the students to be in the roster. There has to be a teacher for the roster. It just balloons out because everything has a foreign key. JOËL: The database requires it, but the test doesn't really care. It's just like, give me a student and make it valid. GEOFF: Yes, yeah. And I find that that challenge is really hard. And sometimes, you don't see how hard it is to enforce things like database integrity until you have a lot of concurrency going on in your application. It was a very rude surprise to me to find out that browser requests if you have multiple servers going on might not necessarily be served in the order that they were made. JOËL: [laughs] So you're talking about a scenario where you're running multiple instances of your app. You make two requests from, say, two browser tabs, and somehow they get served from two different instances? GEOFF: Or not even two browser tabs. Imagine you have a situation where you're auto-saving. JOËL: Oooh, background requests. GEOFF: Yeah. So one of the coolest features we have at CommonLit is that students can annotate and highlight a text. And then, the teachers can see the annotations and highlights they've made, and it's actually part of their assignment often to highlight key evidence in a passage. And those things all fire in the background asynchronously so that it doesn't block the student from doing more stuff. But it also means that potentially if they make two changes to a highlight really quickly that they might arrive out of order. So we've had to do some things to make sure that we're receiving in the right order and that we're not blowing away data that was supposed to be there. Just think about in a Heroku environment, for example, which is where we used to be, you'd have four dynos running. If dyno one takes too long to serve the thing for dyno two, request one may finish after request two. That was a very, very rude surprise to learn that the world was not as clean and neat as I thought. JOËL: I've had to do something similar where I'm making a bunch of background requests to a server. And even with a single dyno, it is possible for your request to come back out of order just because of how TCP works. 
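A sketch of the "keep the base factory minimal" idea Joël describes, using hypothetical model names loosely inspired by the discussion: only associations demanded by foreign keys stay in the base factory, and richer setups move into traits that callers opt into explicitly.

```ruby
FactoryBot.define do
  factory :student do
    sequence(:email) { |n| "student-#{n}@example.com" }
    association :roster  # kept only because students.roster_id has a NOT NULL foreign key

    # Anything beyond the minimum valid record lives in opt-in traits,
    # so tests that don't need the extra data don't pay for it.
    trait :with_completed_lesson do
      after(:create) do |student|
        create(:lesson_submission, student: student)
      end
    end
  end
end
```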
So if it's waiting for a packet and you have two of these requests that went out not too long before each other, there's no guarantee that all the packets for request one come back before all the packets from request two. GEOFF: Yeah, what are the strategies for on the client side for dealing with that kind of out-of-order response? JOËL: Find some way to effectively version the requests that you make. Timestamp is an easy one. Whenever a request comes in, you take the response from the latest timestamp, and that wins out. GEOFF: Yeah, we've started doing some unique IDs. And part of the unique ID is the browser's timestamp. We figure that no one would try to hack themselves and intentionally screw up their own data by submitting out of order. JOËL: Right, right. GEOFF: It's funny how you have to pick something to trust. [laughs] JOËL: I'd imagine, in this case, if somebody did mess around with it, they would really only just be screwing up their own UI. It's not like that's going to then potentially crash the server because of something, and then you've got a potential vector for a denial of service. GEOFF: Yeah, yeah, that's always what we're worried about, and we have to figure out how to trust these sorts of requests as what's a valid thing and what is, as you're saying, is just the user hurting themselves as opposed to hurting someone else's stuff? MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! GEOFF: You were talking about test suites. What are some things that you have found are consistently problems in real-world apps, but they're really, really hard to test in a test suite? JOËL: Difficult to test or difficult to optimize for performance? GEOFF: Maybe difficult to test. JOËL: Third-party integrations. Anything that's over the network that's going to be difficult. 
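A rough server-side sketch of the last-write-wins guard Geoff describes for out-of-order auto-saves: compare a client-supplied timestamp against the one already stored and drop stale requests. The controller, column, and parameter names (client_updated_at) are hypothetical.

```ruby
class HighlightsController < ApplicationController
  def update
    highlight   = current_student.highlights.find(params[:id])
    incoming_at = Time.zone.parse(params[:client_updated_at])

    if highlight.client_updated_at && incoming_at <= highlight.client_updated_at
      # A newer auto-save has already been applied; ignore this stale request.
      head :ok
    else
      highlight.update!(body: params[:body], client_updated_at: incoming_at)
      head :ok
    end
  end
end
```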
Complex interactions that involve some heavy frontend but then also need a lot of backend processing potentially with asynchronous workers or something like that, there are a lot of techniques that we can use to make all those play together, but that means there's a lot of complexity in that test. GEOFF: Yeah, definitely. I've taken a deep interest in what I'm sure there's a better technical term for this, but what I call network hostile environments or bandwidth hostile environments. And we see this a lot with kids. Especially during the pandemic, kids would often be trying to do their assignments from home. And maybe there are five kids in the house, and they're all trying to do their homework at the same time. And they're all sharing a home internet connection. Maybe they're in the basement because they're trying to get some peace and quiet so they can do their assignment or something like that. And maybe they're not strongly connected. And the challenge of dealing with intermittent connectivity is such an interesting problem, very frustrating but very interesting to deal with. JOËL: Have you explored at all the concept of Formal Methods to model or verify situations like that? GEOFF: No, but I'm intrigued. Tell me more. JOËL: I've not tried it myself. But I've read some articles on the topic. Hillel Wayne is a good person to follow for this. GEOFF: Oh yeah. JOËL: But it's really fascinating when you'll see, okay, here are some invariants and things. And then here are some things that you set up some basic properties for a system. And then some of these modeling languages will then poke holes and say, hey, it's possible for this 10-step sequence of events to happen that will then crash your server. Because you didn't think that it's possible for five people to be making concurrent requests, and then one of them fails and retries, whatever the steps are. So it's really good at modeling situations that, as developers, we don't always have great intuition, things like parallelism. GEOFF: Yeah, that sounds so interesting. I'm going to add that to my list of reading for the fall. Once the school year calms down, I feel like I can dig into some technical topics again. I've got this book sitting right next to my desk, Designing Data-Intensive Applications. I saw it referenced somewhere on Twitter, and I did the thing where I got really excited about the book, bought it, and then didn't have time to read it. So it's just sitting there unopened next to my desk, taunting me. JOËL: What's the 30-second spiel for what is a data-intensive app, and why should we design for it differently? GEOFF: You know, that's a great question. I'd probably find out if I'd dug further into the book. JOËL: [laughs] GEOFF: I have found at CommonLit that we...I had a couple of clients at thoughtbot that dealt with data at the scale that we deal with here. And I'm sure there are bigger teams doing, quote, "bigger data" than we're doing. But it really does seem like one of our key challenges is making sure that we just move data around fast enough that nothing becomes a bottleneck. We made a really key optimization in our application last year where we changed the way that we autosave students' answers as they go. And it resulted in a massive increase in throughput for us because we went from trying to store updated versions of the students' final answers to just storing essentially a draft and often storing that draft in local storage in the browser and then updating it on the server when we could. 
And then, as a result of this, we're making key updates to the table where we store a student's answers much less frequently. And that has a huge impact because, in addition to being one of the biggest tables at CommonLit...it's got almost a billion recorded answers that we've gotten from students over the years. But because we're not writing to it as often, it also means that reads that are made from the table, like when the teacher is getting a report for how the students are doing in a class or when a principal is looking at how a school is doing, now, those queries are seeing less contention from ongoing writes. And so we've seen a nice improvement. JOËL: One strategy I've seen for that sort of problem, especially when you have a very write-heavy table but that also has a different set of users that needs to read from it, is to set up a read replica. So you have your main that is being written to, and then the read replica is used for reports and people who need to look at the data without being in contention with the table being written. GEOFF: Yeah, Rails multi-DB support now that it's native to the framework is excellent. It's so nice to be able to just drop that in and fire it up and have it work. We used to use a solution that Instacart had built. It was great for our needs, but it wasn't native to the framework. So every single time we upgraded Rails, we had to cross our fingers and hope that it didn't, you know, whatever private APIs of ActiveRecord it was using hadn't broken. So now that that stuff, which I think was open sourced from GitHub's multi-database implementation, so now that that's all native in Rails, it's really, really nice to be able to use that. JOËL: So these kinds of database tricks can help make the application much more performant. You'd mentioned earlier that when you were trying to make your test performant that you had introduced parallelism, and I feel like that's maybe a bit of an intimidating thing for a lot of people. How would you go about converting a test suite that's just vanilla RSpec, single-threaded, and then moving it in a direction of being more parallel? GEOFF: There's a really, really nice tool called Knapsack, which has a free version. But the pro version, I feel like if you're spending any money at all on CI, it's immediately worth the cost. I think it's something like $75 a month for each suite that you run on it. And Knapsack does this dynamic allocation of tests across containers. And it interfaces with several of the popular CI providers so that it looks at environment variables and can tell how many containers you're splitting across. It'll do some things, like if some of your containers start early and some of them start late, it will distribute the work so that they all end at the same time, which is really nice. We've preferred CI providers that charge by the minute. So rather than just paying for a service that we might not be using, we've used services like Semaphore, and right now, we're on Buildkite, which charge by the minute, which means that you can decide to do as much parallelism as you want. You're just paying for the compute time as you run things. JOËL: So that would mean that two minutes of sequential build time costs just the same as splitting it up in parallel and doing two simultaneous minutes of build time. GEOFF: Yeah, that is almost true. There's a little bit of setup time when a container spins up. And that's one of the key things that we optimize. 
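For reference, a minimal sketch of the native Rails multi-database support mentioned here, routing a heavy report read to a replica so it doesn't contend with the write-heavy table on the primary. The database names and the ReportsQuery class are hypothetical; connects_to and connected_to are the standard Rails 6+ API.

```ruby
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # :primary and :primary_replica must match entries in config/database.yml.
  connects_to database: { writing: :primary, reading: :primary_replica }
end

class ReportsQuery
  def class_progress(klass)
    # Run the expensive aggregation against the read replica.
    ActiveRecord::Base.connected_to(role: :reading) do
      klass.students.joins(:answers).group(:student_id).count
    end
  end
end
```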
I guess if we ran 200 containers if we were like Shopify or something like that, we could technically make our CI suite finish faster, but it might cost us three times as much. Because if it takes a container 30 seconds to spin up and to get ready, that's 30 seconds of dead time when you're not testing, but you're paying for the compute. So that's one of the key optimizations that we make is figuring out how many containers do we need to finish fast when we're not just blowing time on starting and finishing? JOËL: Right, because there is a startup cost for each container. GEOFF: Yeah, and during the work day when our engineers are working along, we spin up 200 EC2 machines or 150 EC2 machines, and they're there in the fleet, and they're ready to go to run CI jobs for us. But if you don't have enough machines, then you have jobs that sit around waiting to start, that sort of thing. So there's definitely a tension between figuring out how much parallelism you're going to do. But I feel like to start; you could always break your test suite into four pieces or two pieces and just see if you get some benefit to running a smaller number of tests in parallel. JOËL: So, manually splitting up the test suite. GEOFF: No, no, using something like Knapsack Pro where you're feeding it the suite, and then it's dividing up the tests for you. I think manually splitting up the suite is probably not a good practice overall because I'm guessing you'll probably spend more engineering time on fiddling with which tests go where such that it wouldn't be cost-effective. JOËL: So I've spent a lot of time recently working to improve a parallel test suite. And one of the big problems that you have is trying to make sure that all of your parallel surfaces are being used efficiently, so you have to split the work evenly. So if you said you have 70 minutes worth of work, if you give 50 minutes to one worker and 20 minutes to the other, that means that your total test suite is still 50 minutes, and that's not good. So ideally, you split it as evenly as possible. So I think there are three evolutionary steps on the path here. So you start off, and you're going to manually split things out. So you're going to say our biggest chunk of tests by time are the feature specs. We'll make them almost like a separate suite. Then we'll make the models and controllers and views their own thing, and that's roughly half and half, and run those. And maybe you're off by a little bit, but it's still better than putting them all in one. It becomes difficult, though, to balance all of these because then one might get significantly longer than the other then, you have to manually rebalance it. It works okay if you're only splitting it among two workers. But if you're having to split it among 4, 8, 16, and more, it's not manageable to do this, at least not by hand. If you want to get fancy, you can try to automate that process and record a timing file of how long every file takes. And then when you kick off the build process, look at that timing file and say, okay, we have 70 minutes, and then we'll just split the file so that we have roughly 70 divided by number of workers' files or minutes of work in each process. And that's what gems like parallel_tests do. And Knapsack's Classic mode works like this as well. That's decently good. But the problem is you're working off of past information. And so if the test has changed or just if it's highly variable, you might not get a balanced set of workers. 
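A toy sketch of the timing-file approach described above: greedily assign each spec file to the currently least-loaded worker based on recorded run times. The timing-file format is invented for illustration, and this is not how parallel_tests or Knapsack work internally, just the general shape of the idea.

```ruby
require "json"

# { "spec/models/a_spec.rb" => 12.3, ... } — hypothetical timing data from a previous run.
timings      = JSON.parse(File.read("spec_timings.json"))
worker_count = Integer(ENV.fetch("WORKER_COUNT", 4))
workers      = Array.new(worker_count) { { files: [], total: 0.0 } }

# Longest-first greedy assignment gives a reasonably even split.
timings.sort_by { |_, seconds| -seconds }.each do |file, seconds|
  target = workers.min_by { |w| w[:total] }
  target[:files] << file
  target[:total] += seconds
end

workers.each_with_index do |w, i|
  puts "worker #{i}: #{w[:total].round(1)}s across #{w[:files].size} files"
end
```

As the conversation goes on to note, any static split like this balances against past timings, so new or highly variable tests can still leave the workers uneven.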
And as you mentioned, there's a startup cost, and so not all of your workers boot up at the same time. And so you might still have a very uneven amount of work done by each worker by statically determining the work to be done via a timing file. So the third evolution here is a dynamic or a self-balancing approach where you just put all of the tests or the files in a queue and then just have every worker pull one or two tests when it's ready to work. So that way, if something takes a lot longer than expected, well, it's just not pulling more from the queue. And everybody else still pulls, and they end up all balancing each other out. And then ideally, every worker finishes work at exactly the same time. And that's how you know you got the most value you could out of your parallel processes. GEOFF: Yeah, there's something about watching all the jobs finish in almost exactly, you know, within 10 seconds of each other. It just feels very, very satisfying. I think in addition to getting this dynamic splitting where you're getting either per file or per example split across to get things finishing at the same time, we've really valued getting fast feedback. So I mentioned before that our Jest specs start the moment NPM packages get built. So as soon as there's JavaScripts that can be executed in test, those kick-off. As soon as our gems are ready, the RSpec non-system tests go off, and they start running specs immediately. So we get that really, really fast feedback. Unfortunately, the browser tests take the longest because they have to wait for the most setup. They have the most dependencies. And then they also run the slowest because they run in the browser and everything. But I think when things are really well-oiled, you watch all of those containers end at roughly the same time, and it feels very satisfying. JOËL: So, a few weeks ago, on an episode of The Bike Shed, I talked with Eebs Kobeissi about dependency graphs and how I'm super excited about it. And I think I see a dependency graph in what you're describing here in that some things only depend on the gem file, and so they can start working. But other things also depend on the NPM packages. And so your build pipeline is not one linear process or one linear process that forks into other linear processes; it's actually a dependency graph. GEOFF: That is very true. And the CI tool we used to use called Semaphore actually does a nice job of drawing the dependency graph between all of your steps. Buildkite does not have that, but we do have a bunch of steps that have to wait for other steps to finish. And we do it in our wiki. On our repo, we do have a diagram of how all of this works. We found that one of the things that was most wasteful for us in CI was rebuilding gems, reinstalling NPM packages (We use Yarn but same thing.), and then rebuilding browser assets. So at the very start of every CI run, we build hashes of a bunch of files in the repository. And then, we use those hashes to name Docker images that contain the outputs of those files so that we are able to skip huge parts of our CI suite if things have already happened. So I'll give an example if Ruby gems have not changed, which we would know by the Gemfile.lock not having changed, then we know that we can reuse a previously built gems image that has the gems that just gets melted in, same thing with yarn.lock. If yarn.lock hasn't changed, then we don't have to build NPM packages. We know that that already exists somewhere in our Docker registry. 
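A rough sketch of the lockfile-hashing idea Geoff describes: derive an image tag from a digest of the dependency manifests, so an unchanged Gemfile.lock or yarn.lock maps to an already-built image that can simply be reused. The registry name, tag scheme, and manifest paths are invented for illustration.

```ruby
require "digest"

def image_tag(prefix, *manifests)
  digest = Digest::SHA256.hexdigest(manifests.map { |path| File.read(path) }.join)
  "registry.example.com/ci/#{prefix}:#{digest[0, 12]}"
end

gems_image   = image_tag("gems", "Gemfile.lock")
npm_image    = image_tag("node-modules", "yarn.lock")
assets_image = image_tag("assets", "Gemfile.lock", "yarn.lock")

# A CI step can then check whether each tag already exists in the registry
# (for example with `docker manifest inspect`) and skip the rebuild if it does.
puts [gems_image, npm_image, assets_image]
```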
In addition to skipping steps by not redoing work, we also have started to experiment...actually, in response to a comment that Chris Toomey made in a prior Bike Shed episode, we've started to experiment with skipping irrelevant steps. So I'll give an example of this if no Ruby files have changed in our repository, we don't run our RSpec unit tests. We just know that those are valid. There's nothing that needs to be rerun. Similarly, if no JavaScript has changed, we don't run our Jest tests because we assume that everything is good. We don't lint our views with erb-lint if our view files haven't changed. We don't lint our factories if the model or the database hasn't changed. So we've got all these things to skip key types of processing. I always try to err on the side of not having a false pass. So I'm sure we could shave this even tighter and do even less work and sometimes finish the build even faster. But I don't want to ever have a thing where the build passes and we get false confidence. JOËL: Right. Right. So you're using a heuristic that eliminates the really obvious tests that don't need to be run but the ones that maybe are a little bit more borderline, you keep them in. Shaving two seconds is not worth missing a failure. GEOFF: Yeah. And I've read things about big enterprises doing very sophisticated versions of this where they're guessing at which CI specs might be most relevant and things like that. We're nowhere near that level of sophistication right now. But I do think that once you get your test suite parallelized and you're not doing wasted work in the form of rebuilding dependencies or rebuilding assets that don't need to be rebuilt, there is some maybe not low, maybe medium hanging fruit that you can use to get some extra oomph out of your test suite. JOËL: I really like that you brought up this idea of infrastructure and skipping. I think in my own way of thinking about improving test suites, there are three broad categories of approaches you can take. One variable you get to work with is that total number of time single-threaded, so you mentioned 70 minutes. You can make that 70 minutes shorter by avoiding database writes where you don't need them, all the common tricks that we would do to actually change the test themselves. Then we can change...as another variable; we get to work with parallelism, we talked about that. And then finally, there's all that other stuff that's not actually executing RSpec like you said, loading the gems, installing NPM packages, Docker images. All of those, if we can skip work running migrations, setting up a database, if there are situations where we can improve the speed there, that also improves the total time. GEOFF: Yeah, there are so many little things that you can pick at to...like, one of the slowest things for us is Elasticsearch. And so we really try to limit the number of specs that use Elasticsearch if we can. You actually have to opt-in to using Elasticsearch on a spec, or else we silently mock and disable all of the things that happen there. When you're looking at that first variable that you were talking about, just sort of the overall time, beyond using FactoryDoctor and FactoryProf, is there anything else that you've used to just identify the most egregious offenders in a test suite and then figure out if they're worth it? JOËL: One thing you can do is hook into Active Support notification to try to find database writes. 
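Joël's suggestion at the end of that answer, hooking into Active Support notifications to spot write-heavy tests, could look roughly like this sketch; the threshold and the warning output are arbitrary choices, not a prescribed setup.

```ruby
RSpec.configure do |config|
  config.around(:each) do |example|
    writes = 0
    subscriber = ActiveSupport::Notifications.subscribe("sql.active_record") do |*, payload|
      # Count only statements that actually write to the database.
      writes += 1 if payload[:sql] =~ /\A\s*(INSERT|UPDATE|DELETE)/i
    end

    example.run

    ActiveSupport::Notifications.unsubscribe(subscriber)
    warn "#{example.location}: #{writes} DB writes" if writes > 25
  end
end
```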
And so you can find, oh, here's where all of the...this test is making way too many database writes for some reason, or it's making a lot, maybe I should take a look at it; it's a hotspot. GEOFF: Oh, that's really nice. There's one that I've always found is like a big offender, which is people doing negative expectations in system specs. JOËL: Oh, for their Capybara wait time. GEOFF: Yeah. So there's a really cool gem, and the name of it is eluding me right now. But there's a gem that raises a special exception if Capybara waits the full time for something to happen. So it lets you know that those things exist. And so we've done a lot of like hunting for...Knapsack will report the slowest examples in your test suite. So we've done some stuff to look for the slowest files and then look to see if there are examples of these negative expectations that are waiting 10 seconds or waiting 8 seconds before they fail. JOËL: Right. Some files are slow, but they're slow for a reason. Like, a feature spec is going to be much slower than a model test. But the model tests might be very wasteful and because you have so many of them, if you're doing the same pattern in a bunch of them or if it's a factory that's reused across a lot of them, then a small fix there can have some pretty big ripple effects. GEOFF: Yeah, I think that's true. Have you ever done any evaluation of test suite to see what files or examples you could throw away? JOËL: Not holistically. I think it's more on an ad hoc basis. You find a place, and you're like, oh, these tests we probably don't need them. We can throw them out. I have found dead tests, tests that are not executed but still committed to the repo. GEOFF: [laughs] JOËL: It's just like, hey, I'm going to get a lot of red in my diff today. GEOFF: That always feels good to have that diff-y check-in, and it's 250 lines or 1,000 lines of red and 1 line of green. JOËL: So that's been a pretty good overview of a lot of different areas related to performance and infrastructure around tests. Thank you so much, Geoff, for joining us today on The Bike Shed to talk about your experience at CommonLit doing this. Do you have any final words for our listeners? GEOFF: Yeah. CommonLit is hiring a senior full-stack engineer, so if you'd like to work on Rails and TypeScript in a place with a great test suite and a great team. I've been here for five years, and it's a really, really excellent place to work. And also, it's been really a pleasure to catch up with you again, Joël. JOËL: And, Geoff, where can people find you online? GEOFF: I'm Geoff with a G, G-E-O-F-F Harcourt, @geoffharcourt. And that's my name on Twitter, and it's my name on GitHub, so you can find me there. JOËL: And we'll make sure to include a link to your Twitter profile in the show notes. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
An airhacks.fm conversation with Jack Franklin (@Jack_Franklin) about: A thick, chunky Dell laptop, playing Tycoon, creating a soccer website with DreamWeaver, learning PHP and CSS, learning Python, Java, and Prolog at university, writing Rails code, the popularity of Ruby on Rails, Python vs. Ruby, switching from Angular to React, Angular 1 vs. Angular 2, backward compatibility and React, React Hooks, hooks vs. lifecycle methods, starting on the Google Chrome DevTools team, working on Chrome Performance Insights, Chrome DevTools is a web application, from a custom framework to Web Components and lit-html, the Chrome SDK manages state, Polymer was chatty, lit-html is a tagged template literal, lit-html performs partial updates, the bar for using frameworks gets higher, lit-html optimises the rendering, console.begin and console.end for a better developer experience, lit-html is used in Chrome, what happens if Facebook loses interest in React, what is the worst-case scenario for losing a dependency, using Chrome's ninja and rollup.js for bundling, Chrome supports import maps, the chrome --custom-devtools-frontend flag, Storybook for Web Components, adding JS comments with JSDoc for type annotations for better refactoring in plain ES6, any and unknown in TypeScript, the Performance Insights panel lowers the bar for website optimizations, the Chrome Recorder generates a Puppeteer script, the Recorder panel is also implemented with Web Components, big UI features are implemented as Web Components, Jack's post: "Why I don't miss React: a story about using the platform", Jack Franklin on Twitter: @Jack_Franklin, Jack's blog jackfranklin.co.uk
The most exciting link in the show notes? The RegEx crossword puzzle. ;) We also talk about the Chrome DevTools, and above all about the Recorder's functionality, which can make automated testing easier. GitHub introduces repositories that can only be viewed by sponsors. Flutter has already shipped its next big release, 2.10, this time with official Windows support and many performance optimizations. Babel 7.17 is out as a new release, and what stood out to Sebi most is that regular expressions now get Unicode sets that can be combined. Write to us! Send us your topic requests and your feedback: podcast@programmier.bar. Follow us! Stay up to date on future episodes and virtual meetups and join the community discussions: Twitter, Instagram, Facebook, Meetup, YouTube
And we're back! What's happening at the start of the year? ### Topics Astro.js Svelte and SvelteKit Fuite - memory leaks Which project has the most stars in 2021? Faker.js - what happened? Windows and the grpc_bench fork Shorts: AngularJS - bye bye, D3 v7.3, FF 93, Chrome DevTools, Brave, NodeJS ### Hosts: Julek and Piotr https://www.linkedin.com/in/juliusz-jakubowski/ https://www.linkedin.com/in/piotrzaborow/ ### Listen however you like Youtube https://youtu.be/UwOQiUfZ2Wk Spotify http://bit.ly/devspresso_spotify Google Podcast http://bit.ly/devspresso_google_podcast iTunes http://bit.ly/devspresso_itunes SoundCloud https://soundcloud.com/devspresso/js-news-66 Amazon Podcast: http://bit.ly/devspresso_amazon_podcast ### Timestamps 00:00 - Intro 01:30 - Shorts 08:32 - Astro 14:54 - Svelte 17:05 - Fuite 23:20 - RisingStars.js 37:55 - faker.js
In this short podcast we look at the parameters Google uses to evaluate the user experience on our site. Today a site's SEO is not only tied to keywords; much of its ranking is influenced by the site's technical implementation. Server speed and security, fast and accurate page loading: we cover the essential parameters Google expects from our site for good indexing. #corewebvitals #pageexperience #indicizzazione
An airhacks.fm conversation with Oleg Selajev (@shelajev) about: the red glowing mic, GraalVM is the runtime for your applications, GraalVM is high-performance, embeddable and polyglot, GraalVM comes with a top-tier Just-In-Time compiler (JIT), GraalVM ships as community edition and enterprise edition, Twitter gains 10% throughput and performance improvement with GraalVM Community Edition, GraalVM is a drop-in replacement, GraalVM is based on OpenJDK builds, the Jikes Java compiler was fast, but not always compatible, Jikes was able to compile Java 5, the dcevm project, javac is written in Java, javac can be compiled to native code, Oracle Aurora JVM - early Java in the database, Oracle AuroraVM - Java in the database, GraalVM comes with better performance while maintaining compatibility, the different tiers of compilers, GraalVM Enterprise Edition is currently part of Oracle's Java subscription, GraalVM Enterprise Edition compiler is smarter and better, GraalVM supports Ruby, Python, JavaScript, R and WebAssembly, Truffle provides an interpreter API for non-JVM languages, with Truffle you can describe the semantics of your language, Truffle specializes the interpreter to execute your program, GraalVM already ships with several languages built-in, GraalVM supports Ruby with its native extensions, GraalVM is able to optimize multiple languages running in a single process, GraalVM ships with Nashorn compatibility mode, GraalVM supports modern Python, WebAssembly can run on GraalVM, GraalVM supports "BigNumber" types for JavaScript, debug support is implemented via Chrome DevTools, with GraalVM Espresso Java runs on Java, the GraalVM team at Oracle Labs is the bleeding-edge resource of language research, Java is well suited for language research, Java BeanShell was a Java source code interpreter, Java runs on Java which runs on Java, Truffle ships with sandbox-like isolation, Espresso is Java on Truffle, performance is relative, Java on Truffle allows easier code reloading, wasm runs on browsers and backends, running wasm in a database, the GraalVM team blog on Medium, Oleg Selajev on Twitter: @shelajev, Oleg's YouTube channel, the GraalVM team on Medium: medium.com/graalvm
This week Chris talks about Bifunctor optics and introduces an app he's been liking recently called CleanShot X, which is a replacement for the built-in screenshot utilities on OSX. Steph talks about her experience using New Relic Browser Stats to troubleshoot a slow page and burnout. Who's feeling it? (Raise your hand.) How do we identify it? What do we do about it? Svelte Is Beloved! - Stack Overflow Survey (https://twitter.com/sveltesociety/status/1422372693827985409?s=21) Bifunctors (https://stackoverflow.com/questions/41073862/what-are-bifunctors/41075765#41075765) CleanShot X (https://cleanshot.com/) Next.js Image (https://nextjs.org/docs/api-reference/next/image) Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. And we're off the rails already, everybody. It's going to be a good one. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, how's your week going? STEPH: Hey, Chris, it's going really well. We talked recently that I have a new laptop. So I have been migrating the things that I'm accustomed to over to my new laptop, but I also love that clean, fresh start. So as part of that fresh start, I was like, what if I use Safari? What if I just switch? I'm a Chrome user, for the record. I'm pretty sure you know that, but just to share that. I was like, well, what if I just switched and I try out Safari for a while? So that was the thing. CHRIS: So I heard the words try and the phrase that was the thing, but I'm going to probe a little deeper. How'd that go? Was it good, great, not so great? STEPH: Honestly, it was fine. I did enjoy being in a new environment to see how Safari handles bookmarks and then also the inspector. So it was novel to be in a different browser where I really don't spend much time in a different browser other than when I need to test this specific UI bug or things like that. But the reason that I ended up migrating back to Chrome was frankly for Chrome profiles because I really like that I can have this clear separation now between my work life and my personal life, and then it also keeps me signed in. So my personal email versus my thoughtbot one versus before the Chrome profiles, which I'm not sure how recent of an addition that is where Chrome introduced that feature. But before, I just always had to be signed into both, and it was just all together in one spot. But now I really like that I can separate. And it's more intentional where I'm like, oh, I'm going into work mode, so I just want that profile versus I do need to hop over to my personal side for a while. So that was the thing that brought me back. CHRIS: Interesting. I don't take advantage of that at all. I know of the feature, but it's never really called to me. And if anything, I do the opposite. So specifically, this doesn't work in the browser, but on my phone, I use the iOS Gmail client, and I use the unified inbox. So I just have everything come together. And I subscribe to the idea of, I don't know, it's all work and stuff. And hopefully, people aren't sending me a lot on the weekends, and I will defer and snooze and all of that. But that holistic view pulls me in. And so it's interesting that you're just on the other side of that. It totally makes sense. 
I actually think I'm wrong here. I think I'm doing the wrong, bad thing. But it's interesting just the way we're on the two sides of that. STEPH: I can see the merits for your approach where it all goes to one place. So you have one place to go and triage, and I think that makes total sense. I haven't triaged my personal life well enough that I want it to come into my work life. And that's the one that needs the more immediate response typically. So I want to prioritize all of my work emails and focus on that and then have my personal ones more like, okay, I've got some time, and I want to check on this. But I don't want to blend those together. Because frankly, I need to do some more triage on the personal side if I'm just going to bring it all into one space. CHRIS: Interesting. Yeah, I would almost view it from the other point of view of I want to protect my personal space, and this is obviously not what I do based on what I just said but protect my personal time so that when it's evenings or weekends or whatever, that I'm not seeing work emails in there. I do my best to snooze them and get them out of the way. But if they are coming in, maybe there's something I need to respond to or, I don't know, maybe it's FOMO in a certain way, FOMO but professional FOMO. I don't really know. It's interesting that that's the feature that brought you back. But overall, how was your experience using Safari? I have heard loosely that now most of the browsers are evergreen; even Edge has really caught up and is now implementing features in a similar way. And so Chrome, Firefox, and Edge are very similar. And my understanding is Safari is the one that actually lags behind or even holds back web standards and implementations and things like that. So, did you find any rough edges of that sort, or was it otherwise just fine, and it was mostly the profile stuff that made you switch? STEPH: It was honestly just fine. I also may have not used it long enough to run into any of those rough edges. But overall, it was just fine. It worked. I just got to that point where I've run into this situation before where I'm signed into my personal email, and then I'm signed into my work email. And then I'm going to Google Hangouts, and Google Hangouts gets confused, and it's like, which person are you? And then I have that moment of where I have to sign out of one, or sometimes it just gets complicated. And what I found with profiles is that that's just never an issue. I don't have to worry about it anymore. So yeah, overall, Safari was fine. I wouldn't mind going back to using it. I just really like the profile feature. This was one of those moments where it helped me notice how much I really liked that feature because I had just opted into it a while back. But this was that moment where I was like, oh yeah, I really miss that. So I'm going to go back to it. CHRIS: The thing you said a minute ago about which person am I? [chuckles] There's a deep philosophical underpinning there, but I've definitely struggled with that and trying to trick the browser into it. So Trello is the one that I'm struggling with that right now. I am one human in the world, I assure you. But GitHub gets this; GitHub plays the game correctly where I'm one human, but I have multiple internet identities. I am the work person. I am the open-source person. And I'm able to route notifications and things to the different inboxes based on the organization that they're part of, et cetera. 
And I really liked that, that GitHub seems to understand that I am a human that is multifaceted, whereas Trello does not. And so I use Trello in a professional context a lot, but it's my personal...like, I would have to create a distinct account on Trello. And I'm like, Trello, that's not true; I'm still one person. Just understand this Trello organization is a separate facet of my existence. Got on my soapbox on that part. The other thing I want to say is I do feel bad about the fact that I'm just on Chrome. Because increasingly, Chrome has got so much of the market share, and it's becoming the new deeply dominant thing. And so I want to be the agent of change or like, no, we should use different browsers, and we should support them and make sure that we're testing against them and all of that. And then I'm just on Chrome all the time, and I feel bad about it. But it's one of those like; I have so much muscle memory and built-up knowledge around how to use Chrome. And I've just used it for so long now that the switching cost would be pretty high, I assume. I actually haven't even really tried, but I feel bad about it. I'm now saying two things which are I feel bad about it, and I've never even really tried. So I don't feel great in this moment, but these are my truths. STEPH: [laughs] Well, I can plus-one your truths. Those resonate with me. Well, there's always stuff that I am trying that's new all the time. So I feel like I need some constant in my life until I'm ready for other things to be the constant in my life. And then I can muck around with which browser I'm using and change other things. And you have to be in that moment. You have to be ready for it. So going back to a thing that you said a minute ago about separating your work life and your personal life, I very much like that framing, and that's a nice segue, frankly. And it's something that I've been thinking about where you and I often start with technical topics. But I have a very people-centric topic that I'd love to chat with you about today, and it is emphasis on burnout. And who's feeling it? How do we identify it? What do we do about it? And that's been very much on my mind because I have noticed a lot of people around me, including myself; I just feel like we are more inclined to experience burnout right now or are going through it actively. And it feels even more important to have those conversations with each other and with ourselves to talk about what does it look like when we're burned out? How do we recognize when we're there? Because often, when we are burned out, it's not something that happens gradually, or at least it's not something that we notice happening gradually. It's like, okay, I'm fine. And then suddenly, okay, I'm burned out. And I'm at a place where then I can't really focus. I feel overwhelmed. I'm drained. For anyone that is less familiar with burnout, one, hooray, and then two, burnout is a state of emotional, physical, and mental exhaustion that can be caused by excessive and prolonged stress. So then that's what leads to us feeling very overwhelmed, emotionally drained, unable to meet constant demands, or those are the things that are causing our burnout. So I have been doing what I typically do when I'm thinking about a particular topic, and then I'm looking to gather knowledge and information is I try to go pretty wide where I start looking for podcasts, books, just people that are having similar conversations and trying to synthesize a lot of the information that they're sharing. 
And I have found some really good stuff. The question is now, what does one do once one has that content, and then how do you bring it back and help apply that content to yourself and then people that are around you? So I have some thoughts there, but before I go further, I'm curious, what's your experience with burnout? CHRIS: I think for me, I've been somewhat lucky in that I don't think I've ever really got into an acute period of burnout, but I've definitely had periods where I just felt weighed down. And there are ebbs and flows of life, and of work, all of those things. There is something that I've been thinking about recently, which is the inherent nature of the work that we do where consistently we're working on something where we don't quite know how to do it, and we're struggling, and we're struggling. And finally, we figure out how to do the thing, how to trick the computer into doing what we want. And then the minute we do that, in trying to encode, ideally, we're automating that away. And we take that solved problem, and we just ship it off. And then, we pick up the next unsolved problem from the list. This isn't entirely true, but it feels like sometimes unspecificity in sort of I'm just working on things that are underdefined, underspecified, and I'm trying to solve little puzzles constantly. And, I don't know, I was feeling that recently of the nature of the work was burning me out. And I think earlier on in my career; I definitely experienced this in a certain way. And then, in the middle of my career, there was this perfect inflection point of my skills and the level of tasks that I was going for. But then, the further I got into my career, the more I tend to take the weird underspecified stuff and anything that's relatively clear I'm giving to other folks on the team. I'm like, "Oh, here's this well-defined piece of work here. Can you go implement this admin page?" Whereas I am doing the investigate integrating with third-party platform XYZ that uses a SOAP API that isn't documented. I'm like, okay, cool. Let me roll up my sleeves and figure out what that means. And I noticed in my work that that was starting to weigh on me, and I was able to shake that off and shift around some of the tasks that I was doing. But that was a particular form where the work itself was weighing on me. And I took a step back, and I was like, why do we do the things that we do as developers? Because there's something just fundamental like, you have to enjoy that nature of challenge and constantly escalating challenge to a certain degree, I think, to really like this work, but it can be a lot sometimes. So I feel like maybe that's a slight digression from the topic, but it was a thing that I was feeling in this space. And that's a little bit of my story. STEPH: Well, the beautiful thing about that is it highlights everybody experiences burnout differently. So that could really be how someone is experiencing burnout where they're taking on all these very complicated, different tasks, and they're feeling just worn down by that and that they're not able to meet demand. And they get to that point where maybe they lose their interest in tech and coding because they have pressed too hard in one direction. And so then they need to take a step back. As for me, I've been thinking back over the last couple of months because there was once or twice where you and I had, I think a conversation here on The Bike Shed where I shared that things were okay, but I didn't feel like my normal self. 
I was losing some of my interest and energy for technology and coding. And I'm very fortunate; I love what I do. So the fact that I wasn't feeling that interest was a really big sign to me that something's different; something feels off. And it does vary depending on the client that I'm working with. And I think feeling that burnout then was a mix of some of those client pressures that I was feeling and that I was working perhaps too many hours as I was very interested in that client's success. And then the other stuff was more personal because we only have, to borrow from the spoon theory, we only have so many spoons to give. And so if you have a lot going on in your personal life as well, that's going to detract from the energy that you also have to give to work. Are you familiar with the spoon theory? CHRIS: I am not. STEPH: I recently heard about it, and I can't stop using it now because I really like it. But it essentially...and there's a really great article that we can link to so others can read about it because I'm not going to remember exactly who came up with this theory. But the idea is that each spoon represents a unit of energy. And let's say if you start each day with only ten units of energy and you use spoons to represent that, as someone needs energy from you, maybe it's work, maybe it's a personal commitment, maybe you're dealing with a chronic illness, then you are giving a spoon away to each of those. So at some point, you're going to run out of spoons. And you want to also be mindful of who you're giving these spoons to because you are giving that energy away. CHRIS: I definitely liked the idea of we start each day with a certain amount of energy, and different things can pull from that pool and whatnot. I'm intrigued by spoons as the unit. It just feels like a weird...I got this little bag of spoons that I walk around with, [chuckles], and I give them out throughout the day. I guess it could be anything in there, you know, objects. But, I don't know, spoons are interesting to me. STEPH: I think it's because this person who came up with the idea was literally having a conversation with their friend in a cafe. And so that was just something that was in front of them. And they're like, oh, I can use spoons to represent. Well, we'll have to double-check the article to make sure, but I think that's why spoons became the representation. So circling back to once you're in burnout, what do you do with it? And that is one of my questions right now. And that's what I'm trying to synthesize a lot of information around. Because once you're in that state, I don't know of a lot of great ways to help other than take time off because, at that point, you're in a crisis state. And you need to step away, and you need to find out how you can recover from having entered this state of crisis. So that feels really important to identify ways that once someone is in that state, that then we can help them. And that feels good. We can advise someone to take PTO. I still don't feel great about it in terms that then, as a manager myself, I don't really know of other helpful ways to then help someone through that period. So then I really started thinking about the fact that once someone is in that burnout stage, frankly, it's too late. We have let someone get to that point that now they are in that crisis instead of addressing it early on. So that is the other thing that's on my mind is one, how do we help people that are already in that crisis state? 
But then two, how do we start identifying that someone is starting to go in that direction? And then how do we help them tell us? How do we then triage those situations? How do we prevent them from getting to that burnout state? And that's where I've also found some really good content. And specifically, there is a podcast that I've started listening to called The Burnout Show. They essentially share their experience with burnout, and what they did about it, how they recovered from it, and then how they continue to fight it because a lot of people then still go back to the workforce. So then, once you do find a way to recover, then how do you go back to work? And there have been some really great episodes. And I'll be sure to include a link for it in the show notes. There's one particular episode with Grant Gurewitz, who is a guest on the show. And he speaks specifically to the strategy of Three Good Pockets. And this speaks to the idea that there are many things that we can't control in our day. It could be work, family, other commitments, but we can strive for Three Good Pockets of time where we focus on something that's just for us. This is time that's reserved for you and any activity that you find restorative or joyful. And each pocket can vary in size. So perhaps that first pocket is spent just reading a few pages from a book that you're enjoying, and then the next pocket of time is spent outside or calling a friend. And Grant also has a great suggestion around if you're worried that you'll get sidetracked and not actually step away, which I felt called out for that one because that one's definitely me. I will have good intentions, but then I won't actually take the break that I set for myself. So Grant recommends creating a list of restorative activities so that way when it is time for that break when your calendar is reminding you, then you have a list of these activities to choose from. So it makes it easier to say, okay, then I can do this for a couple of minutes, and I can truly step away from work and step away from my screen. But especially now, when so many of us when we're sharing our workspace with our restorative space, for everybody who is still working from home or working remotely, then creating those daily breaks are incredibly important to our wellbeing. And so, it has me thinking about what restorative activities can I add to my day? How can I encourage other people to add more restorative activities to their day? So I really appreciated that advice. And I have noticed that the idea of burnout, but not so much burnout specifically, I've been thinking of it as recovery and balance is a theme for me. And it is something that I am purposely choosing as a theme right now where I want to research and understand more of how we handle these situations and continue to make progress not just for myself but also for my team. CHRIS: I think finding that right cadence and structure and way to reinvest in yourself and ideally gain more spoons if that is at all possible or at least defend the spoons that you have, those all feel very meaningful. I do have a question. I'm interested in your thoughts on this. I feel like we hear about burnout a lot in our industry. I get the sense maybe that it is a more common thing. Like, I hear so many developers talking about how their dream is just to give up tech and go get a cabin and just farm in the woods or something like that. And I wonder, is it a more pervasive thing in our industry? So that's one question. 
Another is just an observation that we actually do work in a wonderfully...it's an amazing industry where being a developer, there are so many jobs out there. And I don't want to discredit anyone's efforts if they're earlier on and struggling with that. But broadly speaking, it is a developer's market trying to go out there and get jobs and extremely well-compensated, as a general rule. But does that come with this inherent burnout? And if so, which I'm not sure is true, I wonder if maybe we're just more vocal and maybe we actually share more in public. We have more blogs and podcasts and things like that. And that's just a common thing for developers, and so we hear the stories more often, whereas maybe in other industries, it is actually very common, but people are suffering in silence. But also I do wonder, our industry is still so young. The work that we're doing is changing constantly, and that churn and that working in the unknown maybe there is an inherent nature. So that's a bunch of pontifications off the top of my head. And I have no idea what the answer to any of them is. But I am intrigued because it does feel like the shape of burnout as a concept in the developer world is perhaps a little overrepresented, or maybe it shows up more than I would expect. And, I don't know, is the work that hard? I don't know. But then I hear these stories constantly, and I definitely have felt it myself, so maybe. STEPH: Yeah, maybe. Yes, I do think the work is that hard for the record. It's challenging work. I enjoy it, but it is challenging work between figuring out the tech but then also everything else that comes with that. I don't have anything to back this up, but I suspect that a lot of other industries are also experiencing burnout. And I just happen to be more aware of it right now because I'm hearing it more from my friends and the people that I work with. And I suspect that's more directly related to we all just went through 2020, and probably a number of us were trying to forge ahead and get through that time. And so there may be a lot of us that are just now dealing with those consequences of where we just pushed ourselves through a very hard time. And now a lot of that is manifesting and surfacing around really identifying the damage that we may have done to ourselves by just prioritizing work and trying to put our head down and get the work done even though there was so much happening around us. And I suspect that may be a contributing factor is that now people are really starting to recognize, like, oh, I feel this way. And maybe there's time for me to address it. Or frankly, it may not even be that there's time, but your body is just like, okay, I'm done. I made it through the past year or however many months, and I'm going to start shutting down on you. I've given you all the warning signs, but now we're here. We're at a breaking point. So I don't know about the other industries, but I do know the reason that it's more on my mind is because I'm just hearing it more from people, and they're just expressing it. And so, it has become more of a focal point for me, and I've experienced it myself more recently. I'm sure I experienced this back early on in my career, but I took a strategy of well, I'm just a junior, and I just have to get through this. And I have to build experience. For the record, that is not a healthy mentality. I'm just being honest about where I was in my life. 
And so, I didn't really stop to think about it, but perhaps it is becoming more normalized where people are having more open, honest discussions about where they're at. And if other industries aren't talking about this, I would love for them to. So to round that out a bit, this is something that is just very interesting to me. It's very top of mind. So I suspect I will be sharing a lot more content in future episodes that are just around this. How do we recover? And then how do we balance? How do we work hard without burning out? CHRIS: Work hard, play hard; those are the two placards that you have. Well, I look forward to continued conversations on all of those topics because they are sort of that's the story that underpins all of the work that we do. So I'm very interested to chat more about that. STEPH: Thanks. So what's going on in your world? How's your week been? CHRIS: Oh, my week has been fine. My topics are going to be way more mundane and tech-focused. But let's see, a couple of things, so one is that Stack Overflow has their I think it's Annual Developer Survey. And this year, the results came out, and there was an interesting standout, which was that Svelte was the most beloved framework, which was very exciting to see. Granted, you always have to take these sorts of stats with a grain of salt. But Svelte was 71% loved and 23% dreaded, which they give it as a ratio of how many people really love this thing versus how many people really hate this thing. And so Svelte, 23% of people who have used it are like, I hate that, but 71% loved, so that's a 48% net approval rating. Versus React which was 69% loved, 31% hated or dreaded as the word would be, so that's a 38% net approval. And then Vue, interestingly, was 64% loved, 36% dreaded for a 28% net approval rating. So, yeah, Svelte was decidedly winning in that. But again, the big grain of salt there is looking at the usage stats. React has 40% usage. So of all the respondents, 40% of the people responding to the survey were like, yeah, I've done React professionally, which is a wildly high number for a JavaScript framework. Vue was at 19%, so roughly half of React's usage, which I'm actually impressed that Vue is that high. And Svelte came in at 3%, so it's definitely still in the early adopter strong fan phase. So it makes sense that they would have this outsized high rating. I'm actually surprised that Vue wasn't higher than React, given that. Because I feel like more people are cajoled into React versus Vue can be more of a choice. And I would have expected this to shape out a little bit differently, but yeah, that's the story. STEPH: That's really cool. I liked how you described that as in the very early adopters' strong fan base stage. CHRIS: But nonetheless, the people that are using Svelte do seem to really like it; that's coming through in these numbers. And that definitely is my experience. I love Svelte and would love to continue using it for as long as possible. But really, I want a lot of other people to start using it. I want to really grow the usage base so that there are more libraries, and frameworks, and blog posts, and just mindshare in that space because I really do believe there are some wonderful ideas in Svelte. And it's just so straightforward to implement things that I just want more people hanging out. So that's one quick thing. 
Another quick thing is, I've been using a utility lately or a program called CleanShot X, which is a replacement for the built-in screenshot utilities on OSX, and it is just fantastic. So I can capture a screenshot. I can capture a window. You can capture a GIF or a video. And then you can do little trims and annotations. And then it has this really nice feature where after you take a screenshot, it just hovers in the bottom corner of your screen and is easily accessible. So if you take a video, and then you want to upload it to a Trello card, it's just floating there waiting for you. You can actually dismiss it and push it down, but it's still peeking up from the bottom of your screen, and you can pull it back up, and you can have a couple of them. But it just really makes the whole workflow of grabbing screenshots or videos so easy. And I cared deeply about that because now that I have this tool, I'm all the more inclined to grab a screenshot or a video with just about every piece of work that I do. So it's going into pull requests; it's going into Trello cards. And it's so nice to have a utility that just really makes that as easy as possible. STEPH: I really liked how you mentioned that you can annotate because I often...I'm laughing as I'm thinking about this. When I am taking a video of something that I'm going to share with someone, I will use my mouse to indicate, oh, this is important. And so I circle around it and do silly things with my mouse to try to indicate, but being able to annotate would be so much nicer. I know there is another tool that you're really excited about that I can't remember off the top of my head right now. Do you know the name of the tool I'm thinking of? CHRIS: Was it Loom? STEPH: Yes, Loom, because I also used that for a little while, and I've really enjoyed it. So I'm curious, how do Loom and CleanShot X stack up? Is one replacing the other, or are they complementary tools? CHRIS: Mostly complementary. Loom is great because it hosts the videos, and you can also do audio capture, although I wonder if CleanShot has that as well. CleanShot also, I think, has a hosting thing. So I think there's a strong overlap in their functionality, but right now, I'm using both. And definitely for screenshots and things, CleanShot owns that end of it. And I think it's more likely that I could have CleanShot as the entire tool that I'm using. But I'm still using Loom for, this is a walkthrough where I'm going to talk to you about a thing. I want to make it available at a URL that everyone can see rather than actually getting a GIF or MOV artifact file on my computer. So ever so slightly different, but I think of them, CleanShot X is probably the ideal one. But yeah, I'm still reaching for both. So the one other thing I did want to talk about is I have been expanding our use of dry-monads within the project that I'm working on. And I've done some things. I did some stuff, Steph, and I think it's good. STEPH: Shtuff with Shteph. [chuckles] CHRIS: Shtuff with Shteph, yeah. I'm definitely pushing the envelope of how much we're leaning on these concepts within the app, and I continue to question it. I'm really intrigued to see what happens when other folks come into the project, and they're like, "Why can't I just get the value? It should be a string. Why isn't it a string? Why is it a string that I have to do a ceremony and a dance to get at?" And I'm like, "Well, because everything can fail, you know, like life."
But what I have done here: so dry-monads is the project that we're using, particularly their result type. So the result represents something that can either succeed or fail. And so we either have a success, which is this wrapper around the value that's successfully executed. So say we make an API call, we get back a response. If we get a 200 or maybe even a 300, then we get the data, and that's a success, or we get a failure and the error message. But fundamentally, we're modeling that in our system in a way that downstream from that, we have to basically determine if it was success or failure. So we're really encoding into the system: listen, pretty much everything can fail, so let's be careful with that. Let's be intentional and purposeful with it. But there is an interesting thing where these objects have fmap as a method on them. So fmap is a way to transform that wrapped value, but fmap works specifically on the success case. So if you make an API request, you get back the data. Everything's great. You can call fmap, and it will yield that data into a block. You can transform that data in some way, and then it will rewrap it up as a success object. So you can operate on this thing as if it has been successful. But in the case that it's a failure, it will just ignore that transformation because you don't want to transform the failure. It's going to be a totally different shape of data. So you want to separate those. We're getting into functors and monads here. So I'm going to handwave a bunch. But fundamentally, that's the thing that we're going for here. But we found ourselves really wanting to work with both sides. So we make this API request, and in the case that it succeeds, we actually want to transform and actually slice out a piece of data from the nested object that we get back. So that's one transformation that we want to apply on the success portion of the aisle. But then, we also want to transform the failure message. It turns out this backend is giving us very unfriendly error messages. So we want to take those and transform them into friendlier user-facing error messages. So it turns out we want to map both sides. And so I went to dry-monads, and I was like, what do you got? I want to know about this in the world. And it turns out they did not have anything. So I started looking into it, and it turns out this is a concept in the world of functors, specifically. Or, more specifically, I reached out to a former colleague, Sid Raval, a former thoughtboter as well. And he likes the functional programming stuff, so I knew he was the right person to ask about this. And he pointed me at bifunctors. So I found myself in a new space, category theory, which I never thought I would explore in this way, but here we were. So a bifunctor basically is exactly what I was talking about where there are these two branches. In our case, it's either success or failure, but it allows you to operate on both sides, both branches. So the method or the function that gets applied there is bimap. So it's fmap, which, I don't know why it's f, but that's typically what it's called. Success map would be a really great word in this context in my mind. So success map only deals with the success side, but bimap takes two different transformations, one for the successful outcome and one for the failure outcome. And it allows you to very directly talk about what you want to do with that. To be clear, dry-monads has a function called Either or Either, depending on how you want to pronounce it.
And that takes two lambda/proc-type things because it's Ruby, and functions are kind of weird in Ruby. But it yields you either the successful value or the failure value, but then it doesn't rewrap them. So it's meant to be the terminal. You use that in a controller when you either redirect or render or whatever it is you want to do. What I wanted was something for mapping, so staying in the success object or the failure object, but yeah, bimaps. So I introduced my own extra wrapping layer. This is where things go off the rails, I think. We now have our own internal result objects. I thought about monkey patching for a while. I convinced myself monkey patching was a bad idea. Now that I've implemented it as an extra layer of wrapping, and I got the wrapping wrong like four times, or I kept recursively wrapping and re-wrapping, and there's a reason people aren't supposed to write these things themselves. But I think monkey patching may have been a better idea here, or maybe I should have never done any of this. We ended up with a stable working implementation and a nice test suite that covers it. But I introduced bimap and failmap as two different methods on our success object. And I did it by doubly wrapping the result objects. So we have our internal result, which wraps the dry-monad result. And I'm worried about that future situation where a junior developer comes on the team and is like, "I don't know what any of this is." STEPH: I love the weekly progression of I've done some things, and let's talk about it. And then seeing this glimpse into your argument with yourself as to yes, but we need it, and we want it, and it's not something that's defined. So let's go ahead and implement it. You ended on a high there where you talked about the fact that it is nice to work with. It does the thing that you'd like it to do, and it's well tested; I love that part. I'm deciding which thread to go with because there were a lot of interesting bits in everything that you shared. And I'm intrigued about the monkey patching as to why you think that could have been a better approach. Could you talk more about that one? CHRIS: Sure. So I ended up having to introduce the secondary object that then wraps the result. And so if you poke at that even a tiny bit, you start to see this like Russian doll nesting of it's our result wrapping a result, wrapping a value. And it's a burrito with three different tortillas wrapped around it. And if you want to add guacamole, you got to unwrap all three burritos. I'm sorry, this is a terrible monad joke here. [laughs] STEPH: For the record, if Taco Bell is not offering that, they should now, now that you've created that. [laughter] CHRIS: There is one, but there's usually cheese in between the layers. They're not just extra layers of tortillas, so that's what I've got here. It's way too much tortilla, and that's sad. You don't want that. And it's confusing. You're like, wait, does this one have two or three layers of tortilla? So when I was working on it and when I was implementing our additional wrapping layer, I tricked myself multiple times. And my test suite was telling me that things were working, but I was testing incorrectly. And I was like, oh man; this is very subtle. And even though I'm deeply immersed in the context here, I'm still struggling with this. So the question is, did I successfully encapsulate all of this? And now anyone downstream just gets to use it, and everything will be fine.
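For anyone reading along who wants to see the shape of what Chris is describing, here is a minimal TypeScript sketch of a success/failure result with a map that only touches the success branch and a bimap that transforms both branches. It is an illustration of the general idea only, not the Ruby dry-monads API or the wrapper discussed here, and every name in it is invented for the example.

```ts
// A success/failure result, sketched in TypeScript purely for illustration.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// map (fmap in dry-monads terms): transform the success value, pass failures through untouched.
function map<T, U, E>(r: Result<T, E>, f: (value: T) => U): Result<U, E> {
  if (r.ok) {
    return { ok: true, value: f(r.value) };
  }
  return r;
}

// bimap: transform both branches, staying wrapped rather than unwrapping.
function bimap<T, U, E, F>(
  r: Result<T, E>,
  onSuccess: (value: T) => U,
  onFailure: (error: E) => F
): Result<U, F> {
  if (r.ok) {
    return { ok: true, value: onSuccess(r.value) };
  }
  return { ok: false, error: onFailure(r.error) };
}

// Example: slice a field out of an API payload on success, and swap an
// unfriendly backend message for a user-facing one on failure.
const response: Result<{ data: { plan: string } }, string> = {
  ok: false,
  error: "ERR_42_UPSTREAM_TIMEOUT",
};

const forTheView = bimap(
  response,
  (payload) => payload.data.plan,
  () => "Something went wrong loading your plan. Please try again."
);
console.log(forTheView); // { ok: false, error: "Something went wrong loading your plan. Please try again." }
```

The either function mentioned in the conversation would be the terminal step that finally unwraps the value; map and bimap keep everything wrapped until a controller or mailer decides what to do with it.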
Am I the heroic programmer that made the perfect abstraction that no one's ever going to struggle against, or was that pain that I was feeling representative of the complexity of what I'm trying to do here, and maybe I got it wrong? And so then the monkey patching side is we've got this one layer of wrapping. What if we just monkey patch the result such that it's got these new methods? I'm not adding an additional layer. I don't need to deal with double mapping through multiple layers, and I just get to deal with the context that I have. So I did an initial spike of an implementation that way, but I talked myself out of it because #monkeypatchingisbad, but I don't know. I don't know where I'm ending here. I am happy with where we're at, but I am aware that I may be sad down the road. STEPH: I'm just dying over here [laughs] from everything you're saying. I have this image of you staring out the window thinking, am I the hero, or am I the villain? And figuring out who you are in this scenario with this abstraction that you've created. CHRIS: To be clear, it's raining, and I have a nice rocks glass of scotch in my hand. And I'm just wistfully looking out the window trying to determine what's true. That's pretty accurate, actually. That's pretty much what's going on here. STEPH: I think we're just going to need updates as it progresses along. CHRIS: I will say overall this paradigm...so failures can happen at all levels. These command objects, which are the core of where this is coming in in the application, really represent the workflows of the app in a wonderful, testable, straightforward way. So I love that part. And if I have that, it implies that I have to have this other stuff. I don't think I can get away from that. The other thing is I've always loved Gary Bernhardt's Functional Core, Imperative Shell, which is a conference talk that he gave a while back. I talked to Gary about it actually when he was on the show, and we can link to both that and the conference talk because they are fantastic. And I just love the way he thinks about software. But that was always a little bit abstract for me. Like, what does that actually look like, though, in say, a Rails architecture? And I didn't have a great answer. And now this thing that I'm doing is the closest I think to that where the innards of the system are almost functional even though it's Ruby. We're leaning into that. And so we have these command objects that take in some data, and then they operate on it, and they yield out these results. Did it go well, or did it not? We've got the railway-oriented stuff, which again I'll link to. I link to now every third episode, apparently, but here we are. And that just models the reality of these programs in a really great way. And then we've been trying to introduce some, not rules per se, but guidelines as to how we interact with these things. So inside of the core of the application, we're trying to be as functional as possible. We do transformations on these result objects, but that's it. And then it's only really in the controllers or the mailers that we are doing unwrap or deal with the actual nested value, but everything else is working on conceptual values or whatever it is…these result objects. And that's actually been really nice, and it's allowed us to have really nice error handling within the app. The logging is very straightforward. A lot of apps that I've worked on in the past, I've just silently thrown away many error cases or edge cases. 
And this is a really great way to sequence the work that needs to be done but never throw away data that you would want. And so far, I'm finding it to be really great. And I'm seeing very obvious ways to hook into it like, oh, this is where we need to find the user-facing error message versus this is where we figure out what we want to log. And so, in some ways, it's great, but I am still open to the idea that this was a terrible idea. I always remain open to that idea to be clear. [chuckles] STEPH: I love how that's the feedback that you're always open to it. You're always trying something new, and then you're constantly going back to revisit; was this a good idea or a bad idea? To go back just a little bit, I do absolutely love the priority and focus that you're giving to the failure state because I feel like that is an area that we, as developers we're very skittish of that failure state. And I realize I'm projecting here. So if you're listening, feel free to take that however you like; maybe it doesn't apply to you, maybe it does. But the failure state is like you said, it's life, it's important, and it's something that is going to happen. And it is something that we should make accommodations for. And I find that we're often very hand-wavy with a failure state. So I love, love how much you always prioritize the failure state and make that something that people can work with and understand. Versus then when something goes wrong, then that's when we have to start to understand the failure state. I recognize there's a balance there because you're not going to know the failure state until you encounter it, but there are ways that we can still optimize to have observability into that failure state for when we do encounter that failure. CHRIS: I've definitely seen that as an evolution in my own thinking, how much am I focused on how easily can I do the core thing, the happy path versus how robustly can I do all of the variations of what this app needs to do? What if the network's down? What do we do there? I do occasionally worry that I've overcorrected on that. And it's like, you know what? This thing that I'm worried about that I'm protecting against in the application is a 0.01% edge case. It's going to affect almost no users, and we're both putting time into trying to avoid it. But also, there's code complexity that comes from trying to handle all the different variants. And so there's definitely an optimization, and I feel like, at different points in my life, I've been undercorrected or overcorrected on that. But I think if I were to describe the arc of my career, it is desperately searching for that optimal path and trying to find exactly the right amount of error handling to apply, and then yeah, then I'll be happy, then it'll be great. But it is like when I look at DHH's classic 15-minute I'm making a blog, look how much I'm not doing, I'm like, sure, sure, sure. Show me that in three years, though. What does the blog look like? How easily can I add a new feature? What happens when there's a bug in production, and a user reports it? Can I chase it down? Can I figure it? Can I fix it? These are the questions that I care about now, almost to the exclusion of what's the first run experience like? I almost don't care about that at this point. Because I spend my time…six months and on that's where the hard work is. And so the first couple of months where you're figuring things out, that's not the hard work of this thing. And so, I'm very strongly focused on those later periods of time. 
But again, I'm open to the idea that maybe I am overcorrected there. STEPH: I think it does highlight more of a shift in our career. We're still building, but we have experienced the maintenance side as well, and we felt that pain. And so that has led to perhaps overcorrecting, or maybe it's the correct amount of correction. But I do like how you highlighted there is always a cost to each side and those are usually the questions that I'm asking myself when I'm thinking about the failure mode and how much I want to optimize for the failure mode is how much does it cost when this fails? Who's it going to impact? And how much does it cost for me to make this more observable or to address this failure state? And then I try to find the balance between those two. Because you're right, it's not free to address that failure state. And so I may not want to fully optimize to handle that if it's going to be a very small percentage of users that actually are impacted by this failure state, or it seems very rare this is going to happen. But then still finding ways to know that if it does fail, then I can say, "Okay, I'll come back, and now it's worth the investment to improve this." CHRIS: When you said earlier that this is really hard work that we do, I don't know that I believed you. What you just described sounds super easy. You just handle all the stuff, and you dynamically optimize for the needs at the point in time. And that seems easy. [laughs] STEPH: Super easy, yeah. [laughs] Way to bring it back. Well, speaking about observability and failure states, that does lead nicely into a bug that I was working on this past week where there was a particular page that was loading very slowly. And it was something that we'd heard from users that then they let us know that this page was either taking a very long time to load or, frankly, it was just crashing. And then they were never getting to that page. So I happened to be the one that then picked up that ticket. And I went to reproduce the issue, and sure enough, when I clicked on this particular link and then started counting, it took about 14 seconds for that page to load, which is a very long time. And then also sometimes it was just crashing. So the first place that I went was to our error tracking. So I went to New Relic to then look to see okay; maybe there's a slow query. There's something here that's creating this performance issue, but I couldn't find anything. And New Relic does a great job of breaking down all the different response times so I can see how long Postgres is taking, Redis, and Ruby. All of those looked very normal. I couldn't find anything that seemed alarming that was indicating that the page was struggling to load even though I could reproduce the problem. Because I was clicking on it several times thinking, okay, well, if I just do this a couple of times, New Relic's going to notice, and then I'll get to see something, a little breadcrumb that's going to lead me in the right direction. And while I was waiting for New Relic to surface something helpful to me, I mentioned to another developer the issue that I was triaging. And they said, "Yeah, that page has been getting progressively slower, and we don't know why." And I thought, ooh, okay, I'm intrigued even more now as this is something that has been escalating over time, and now we've hit this threshold that we're working on it. And I discovered that in New Relic, I can look specifically at Postgres, Redis, Ruby, all those different response times. 
But there's a browser monitoring tool that I had not used before. And it showed a lot of helpful information around First Paint, First Contentful Paint, window load, all of those areas. So I started diving in and found session tracing, and it was there that then I saw New Relic was telling me, "Hey, you have a page that's taking about 14 to 15 seconds to load." And I thought, okay, I feel validated now that at least New Relic is recognizing this issue. I have seen this issue, but I still didn't know why it's occurring. So the next tool that I used, which I don't know if I've used before or it's just been a very long time; it felt fresh in the moment, is the Chrome DevTools performance tool. And so you can open that up in your inspector, and then you can go to the page that you want to track the performance of. And then you can essentially say, "Hey, go ahead and start profiling and reload this page." And it has so many stats when it finally does load. It has CPU flame charts, which essentially visualize a collection of all the stack traces. It has a film strip, so you can actually see the rendering progress of your website along different time points. So if you wanted to go back to a specific time, you can see what did the webpage look like at this point? And then if you go a little further, okay, how much was loaded at this point? So there's a lot of interesting and a little overwhelming information that's there. But the thing that did catch my attention is there's a chart. I don't actually know what this chart is called. It's not a pie chart because there's no center to it. So it looks like a donut chart, and it's broken down. And it shows you the loading times, scripting, rendering, painting, all of those different values. And the rendering time was taking 35 seconds. And I was like, ooh, okay, that is meaningful right there. So then further investigation, now that I knew what I was looking for, I wasn't looking for something more on the back end. I was looking for something more on the front end. And I didn't think it was necessarily JavaScript because we also have JavaScript on this page. So at least this was helping me get a little bit closer before I went into the codebase to start seeing what's happening. So once I knew it was a rendering issue, I went to look specifically at that view, and we have a form on that page that was generating an empty HTML select option for every record that's in the system. So let's say that you're ordering from a restaurant. On this page, there was a form where it had a list of all the restaurants, and, in our particular case, we had about 17,000 restaurants. So there were 17,000 empty HTML select options, which could have some significant impact on the DOM and page load time. And that was the piece that was really leading to the performance problem, the fact that we were rendering all of those empty select options. So from there, it was then just triaging: okay, we don't really need to render all of these restaurants. There are ways we can scope this down. And that way, we're only showing a little bit at a time versus creating all of these empty options. I should clarify they're empty because part of this form is you select from the first dropdown, and then it populates the other one, and it gives you more information. But the way that this form was implemented, it was actually trying to show all of them at once. But it didn't actually have the data yet, but it was doing like a restaurant.all type of count.
And so then that's how we were getting that many empty options. So it was a very interesting journey. It was very helpful to learn that New Relic has this browser monitoring tool. And I really appreciated their performance tool. And circling back to Chrome, Safari may have something similar. But I found Chrome's performance tool very helpful because then it helped me realize that it was the rendering. And so then I could really focus on the markup and the view versus knowing it wasn't more in the database layer. CHRIS: I really love the description, almost like a mystery novel of these bugs when we encounter them. Because if you just get to the end and you're like, oh, I was rendering a select, and it had all of this, that loses so much of this story because again, the coding is not the hard part of the work that we do; it's the figuring out what needs to be done. And in this case, that journey that you went on to find the bug. I really like the point where you said, "And someone mentioned this page has just been getting progressively slower over time." I was like, ooh, that's interesting. Now we got a clue. Now we've got a lead, and we'll chase it down, and then finding the browser tools and all of that. And also, as an aside, browsers are just such an immensely impressive piece of technology, everything that they do. And then you add the DevTools on top of them and magical stuff going on there. But yeah, also probably don't render 17,000 empty selects. [laughs] That seems like it will get you in trouble pretty quickly. But also very easy to get to and especially if there is this incremental, slow creep over time where it's like, oh, that page seems like it's a little slower. It's a little slower still, and it just keeps creeping up over time. But yes, I appreciate you taking us on that journey with you. STEPH: Yeah, it was a fun discovery. And it made me realize that while we have alerting set up for some of our other queries, we don't have anything set up for the browser time. So that would be a good optimization on our side is to start alerting us before a page gets to the point that it's taking that long to load to notify us sooner. So we don't have to wait for a user to reach out to us, but we can triage sooner. CHRIS: I also do love the idea of extending the metrics that we hold ourselves accountable to all the way through to the user, and so the First Contentful Paint and all of that. The one that I really love recently that has captured an idea that I struggled to put words to is the Cumulative Layout Shift. Are you familiar with this piece? STEPH: Uh-uh. CHRIS: So there's like, how quickly does the page render? That's the thing that we want to know. But a lot of applications these days, particularly single-page apps, render pretty quickly, but they render what ends up being a skeleton or a shell of the page. And then behind the scenes, there are like ten different AJAX requests happening. And as the data comes in, suddenly, a part of the screen will render. And they'll render a list of items that they just got back from the back end, but they're still waiting for the information to populate the header. And so if you look at that page, it's constantly shifting as it's loading and just feels, I don't know, flimsy in my mind. But I didn't have a good word, or I didn't have a metric or a number to attach to that. And then I learned about Cumulative Layout Shift, and I was like, oh, that's the one. Now you've mapped the thing that I was feeling. And I like when that happens. 
STEPH: Is that the difference between First Paint and First Contentful Paint? Is that similar? CHRIS: I think it's more than that because there are not just two discrete events in this. It can be multiple. And so it's like, how many different times did the thing that's rendered on the screen move around? And so if images are loading, but you didn't have a proper image height and width set, that's another way that this can happen. Then initially, the browser is not going to reserve any space for that because it's like, I don't know how big this is. And then, when the image shows up, it now knows the intrinsic height and width of the image. So suddenly, your page is going to jump from that. You can get ahead of that by putting the height and width on your image, and that's great to do. And frameworks like Next.js have done some really amazing work of making that a build time step as opposed to something you have to do manually. But then also, more generally, how do we handle this? React is doing some interesting work with Suspense, where you can aggregate together multiple different loading states into one collective thing. It's almost like Promise.all, but for your page. I haven't followed that too closely, but I know that that's framework-level work that's happening over there. GraphQL does a really great job of allowing you to group queries together. So there's a solution on that side. But broadly, if you just render some HTML on the server and you send it to the front end, then you don't have this problem because you just have one ball of HTML. The browser is pretty good at rendering that in one pass versus if you have single-page applications that are making a handful of AJAX requests that will resolve in their own timelines and eventually paint to the screen. You get this different shape. And then the worst case of it in my mind is you render half the page. And then suddenly, one of the requests realizes the JWT has expired, and suddenly, you get thrashed over to the login page. Please don't give me that experience, developers, please. Please do something else that isn't that. That makes me sad in my heart. STEPH: Prioritize the failure state. That's what I'm hearing. CHRIS: Callbacks. STEPH: Well, on that wonderful circular reference, shall we wrap up? CHRIS: Wait, I thought circular references were bad... Never mind. Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed, or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee. Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
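One brief aside before the next set of show notes: the Cumulative Layout Shift metric Chris describes above can be watched directly in the browser with the PerformanceObserver API. The sketch below keeps a simple running sum, assuming a Chromium-based browser that supports the layout-shift entry type; the production metric actually groups shifts into session windows, and in practice the web-vitals npm package handles those details for you, so treat this as a rough illustration only.

```ts
// A simplified running CLS total: sum layout-shift entries that were not
// caused by recent user input.
let cumulativeLayoutShift = 0;

// layout-shift entries expose `value` and `hadRecentInput`, which are not in
// the base PerformanceEntry type, hence the local type below.
type LayoutShiftEntry = PerformanceEntry & { value: number; hadRecentInput: boolean };

const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
      console.log("layout shift:", entry.value, "running total:", cumulativeLayoutShift);
    }
  }
});

// `buffered: true` replays shifts that happened before the observer was created.
observer.observe({ type: "layout-shift", buffered: true });
```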
Craig Buckler joins the panel to jabber about Chrome Dev-Tools and some things you may not know you can do with them to empower your own front-end development. Some of the basics you may already know like Incognito mode. Some others you may not know like black boxing libraries you don’t control or throttling connections to simulate poor connections. He also talks through searching through network requests to see how your domain’s specific requests perform. Panel Aimee Knight AJ O'Neal Charles Max Wood Dan Shappir Steve Edwards Guest Craig Buckler Sponsors Dev Influences Accelerator JavaScript Error and Performance Monitoring | Sentry Links Firefox Developer Tools 15 DevTool Secrets for JavaScript Developers CSS-Tricks Screencasts: #173: Ooooops I guess we’re full-stack developers now. Browser Devtool Secrets Windows Subsystem for Linux 2: The Complete Guide Docker for Web Developers Docker course samples and excerpts ( discount code dock30 ) Jump Start Web Performance Craig Buckler - YouTube Craig Buckler, Author at SitePoint Craig Buckler :: freelance UK web developer, writer, and speaker Craig Buckler Twitter: Craig Buckler ( @craigbuckler ) Picks Aimee- AWS flash cards Aimee- Normatec 2.0 Leg System AJ- Emulate Mobile Hardware AJ- The Black Prism (Lightbringer) AJ- webinstall.dev/wsl Charles- Having a workout buddy Charles- Water Balloon Launcher Charles- Camp Stove and Griddle Combo Craig- How to Favicon in 2021 by Andrey Sitnik Craig- When you're trying to print something by Stevie Martin Dan- Master of the Five Magics Dan- Introducing WebContainers: Run Node.js natively in your browser Contact Aimee: Aimee Knight – Software Architect, and International Keynote Speaker GitHub: Aimee Knight ( AimeeKnight ) Twitter: Aimee Knight ( @Aimee_Knight ) LinkedIn: Aimee K. aimeemarieknight | Instagram Aimee Knight | Facebook Contact AJ: AJ ONeal CoolAJ86 on GIT Beyond Code Bootcamp Beyond Code Bootcamp | GitHub Follow Beyond Code Bootcamp | Facebook Twitter: Beyond Code Bootcamp ( @_beyondcode ) Contact Charles: Devchat.tv DevChat.tv | Facebook Twitter: DevChat.tv ( @devchattv ) Contact Dan: GitHub: Dan Shappir ( DanShappir ) LinkedIn: Dan Shappir Twitter: Dan Shappir ( @DanShappir ) Contact Steve: Twitter: Steve Edwards ( @wonder95 ) GitHub: Steve Edwards ( wonder95 ) LinkedIn: Steve Edwards
https://stackoverflow.com/a/46780568/971592 Hello there friends. Sorry, it's been a little bit of a break, but I was working on something and I wanted to measure how fast it was, and so I'm gonna use the Chrome DevTools performance tab to profile it. And it's always so frustrating to try and find the part of the code that you're trying to profile in that flame graph. It's really kind of a confusing area. And I remembered that there's the performance mark API, so you have the ability to add your own custom timings. You can say, okay, here's the start of what I'm trying to measure and here's the end, and how long did all of that take? And you can do performance.now, console.log it, and then compare and stuff, but I wanted to look at the flame graph. It didn't work right away, and I looked up and found a Stack Overflow post that showed exactly how to do this, so I'll link to that in the notes of this episode. But basically, it's a combination of the mark and measure APIs on the performance object. So you say performance.mark and you give it a string to label that mark. And then you do whatever it is you want to do. In my case, I wanted to measure the reading time of a blog post, and I'm using this module called reading-time, which I think is what all the Gatsby blogs and everything use for this. So I call that function on a really big blog post to test this out, and then on the next line you say performance.mark and give it another label. Right after that you say performance.measure and you give it three arguments: the first is the label for the measurement, the second is the start marker (so whatever you put for the first performance.mark, that's your second argument here), and the last argument is whatever you put for the end. So you give it a name and then the start and the end, and then that will show up in your DevTools in the performance profiling tab, and it just makes it so much easier to make sure that you're honing in on the area of the code that you actually care about. Unfortunately, I don't know whether this works very well for asynchronous code. So if we're gonna do an await here, and then after that await, I don't think that's gonna work very well. But for synchronous code it works great. And actually, if we're talking about async code, there is some performance profiling you can do with React apps specifically, and they do have the ability to do some sort of timings for that with asynchrony. It's actually very cool, and I teach you about it in Epic React. So if you're not done with the performance workshop yet, or actually the performance section of the bookshelf app, then you may not have run up against that. But it's really cool. All right, hopefully that's helpful. I'll chat with you later.
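Concretely, the mark/measure combination described above looks roughly like the sketch below. The blog-post string and the label names are placeholders, and the reading-time call assumes that package's published API (it returns an object with a human-readable text field, among others).

```ts
import readingTime from "reading-time"; // the npm module mentioned in the episode

// Placeholder content standing in for a really big blog post.
const hugeBlogPostMarkdown = "All work and no play makes Jack a dull boy. ".repeat(50_000);

performance.mark("reading-time:start");
const stats = readingTime(hugeBlogPostMarkdown); // the synchronous work being profiled
performance.mark("reading-time:end");

// measure(name, startMark, endMark): the three arguments described above.
performance.measure("reading-time", "reading-time:start", "reading-time:end");

console.log(stats.text); // human-readable reading time, e.g. "N min read"
// The "reading-time" measure now shows up in the Timings track of the
// Chrome DevTools Performance panel, so the relevant slice of the flame
// graph is easy to find.
```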
We’re talking about VisBug. What is it, and how is it different from the array of options already found in Chrome DevTools? Drew McLellan talks to its creator Adam Argyle to find out.
In this week's episode, Sarah and Areej chat to Natalie Mott, freelance SEO consultant, about all things Core Web Vitals. We also find out what inspires Natalie and what empowers her to be the brilliant woman she is today. Where to find Natalie: Twitter: https://twitter.com/njmott LinkedIn: https://www.linkedin.com/in/natalie-mott/ Core Web Vitals Resources: https://web.dev/ (https://web.dev/) - All Google's guidance on Core Web Vitals https://web.dev/chrome-ux-report-data-studio-dashboard/ (https://web.dev/chrome-ux-report-data-studio-dashboard/) - Google's own Data Studio dashboard for tracking stats from the CrUX report https://developers.google.com/speed/docs/insights/v5/get-started (https://developers.google.com/speed/docs/insights/v5/get-started) - Link PageSpeed Insights with ScreamingFrog to pull CrUX data for every page on your site as you crawl https://developer.chrome.com/docs/devtools/ (https://developer.chrome.com/docs/devtools/) - use Chrome DevTools to drill down and work out which parts of your page are affecting your CWV scores. In particular, if you are looking to see what part of the page has the LCP, or when a Layout Shift is occurring, you can use Performance > Show Web Vitals to pinpoint where this is happening https://corewebvitals.iprospect.com/ (https://corewebvitals.iprospect.com/) - a good summary of the state of play in every industry from iProspect https://reddico.co.uk/tools/serp-speed/ (https://reddico.co.uk/tools/serp-speed/) - check how you and competitors are performing for Core Web Vitals on a keyword level --- Episode Sponsor Massive shout out to DeepCrawl for supporting the WTSPodcast and sponsoring this episode. DeepCrawl offers the complete end-to-end technical SEO platform with the tools and integrations you need to grow - detecting technical improvements that will help you drive growth, and protecting your website from harmful code through SEO testing automation. Where to find DeepCrawl: Website - https://www.deepcrawl.com/ LinkedIn - https://www.linkedin.com/company/deepcrawl Twitter - https://twitter.com/DeepCrawl Facebook - https://facebook.com/deepcrawlHD --- Episode Transcript Sarah: Hello and welcome to the Women in Tech SEO podcast, where your hosts are myself, Sarah McDowell, SEO Content Executive at Holland and Barrett, and the delightful Areej AbuAli, who is an SEO Consultant and the Founder of the epic Women in Tech SEO community. This week, we have the wonderful Natalie Mott joining us, who is an SEO all-rounder with core interests in technical SEO, content strategy, project management and outreach. She has had senior SEO positions at several digital agencies and is thoroughly enjoying spending time in the world of freelance consultancy. A very warm welcome and hello to the both of you. This episode is sponsored by DeepCrawl. DeepCrawl offers the complete end-to-end technical SEO platform with the tools and integrations you need to grow. Detecting technical improvements that will help you drive growth and protecting your website from harmful code through SEO testing automation. Discover just how DeepCrawl's technical SEO platform can help you increase your search performance and revenue by visiting their website: deepcrawl.com. You can also follow them on Facebook, Twitter, and LinkedIn. Sarah: Thank you so much for spending your Saturday morning with me and Areej. Very, very appreciative. How are we doing? How's your morning been? Natalie: Quite productive, actually. Got up early. Had a good, good breakfast.
That's a bit unusual for a Saturday, to be honest, but yeah, it was going very well. How about yourself? Areej: Yeah, all good. All good over here. I'm just really, really excited to have you here with us. We've met several times since the very start of the Women in Tech SEO community. But I'd love you to tell the community a bit more about you. So how
An airhacks.fm conversation with Justin Fagnani (@justinfagnani) about: creating fireworks animations with the Apple IIe, games were hard to get for the Apple IIe, "hello, world" with Apple Basic, enjoying the un-productivity and making funk music, Basic, Turbo Pascal, Turbo C, Java and Python, starting with Java 0.9 and Applets, Microsoft introduced JScript (Visual J++) with major incompatibilities with Java, starting with Python and Django, Python over Ruby, studying an algorithm book for two weeks to pass the interview at Google, using FileMaker, starting at Google's HR department compensation planning system, creating AppMaker during the "free" 20% Google time, AppMaker was shut down in 2020, AppMaker is a low-code application builder with one-click deploy, GWT and Java were heavily used at Google, using Java's Rhino to run JavaScript on the server, the AppMaker clone with Dart, writing parsers and Polymer Dart, Chrome supported Dart, leaving Dart before Flutter, AngularDart is very popular in the apps group at Google, Wiz is the most popular web framework at Google, joining the Polymer team, HTML imports vs. JavaScript imports, CSS-modules and JSON-modules proposals, lit-html started to provide a better tooling story for Polymer 3.0, lit-html vs. hyperHTML, ES6 template literals enable great performance for lit-html, Microsoft's FAST framework was inspired by lit-html, lit-html source code fits on a slide, lit-html source size is close to 3kB, the first lit-html breaking change since 2017, the contractual obligation to support IE, lit-html vs. lit-element, lit-element offers a richer, reactive lifecycle, decreasing lock-in is lit's design philosophy, passing data between component trees, cross-DOM communication with Custom Events, Web Components conventions are micro stacks, less and less need for a JavaScript framework, Chrome is shipping with import maps, web platform - and the tooling is optional, Polymer was not the component host, Polymer is popular inside Google, lit-html is growing fast at Google, Chrome OS is using lit-element, Chrome DevTools is implemented with lit-html. Justin Fagnani is @justinfagnani, @polymer and @lit_html on Twitter.
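Several of those talking points hinge on how lit-html leans on tagged template literals, so here is a minimal, illustrative sketch (assuming lit-html's html and render exports; the greeting template is made up for the example):

```ts
// Minimal lit-html sketch: a tagged template literal describes the DOM once,
// and re-rendering only updates the dynamic bindings (the ${...} holes).
import { html, render } from 'lit-html';

const greeting = (name: string) => html`<p>Hello, ${name}!</p>`;

// The first render creates the DOM; subsequent renders diff only the bound values.
render(greeting('world'), document.body);
render(greeting('again'), document.body);
```

The tag function sees the static strings once, so repeated renders only touch the bindings, which is the performance point made in the episode.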
Today's topics: Steuer-ID (tax ID), WhatsApp, gambling apps, Erasmus+ ***SPONSOR NOTICE*** How can you release the performance brakes on your websites and become more relevant to search engines? The virtual front-end developer conference from c't on February 9 is all about website performance optimization. Six talks cover Google's Core Web Vitals, image optimization, testing with Chrome DevTools and Cloudflare Workers, JavaScript fine-tuning, and performance improvements with WordPress. Three in-depth workshops on February 10 round out the event. Further details and tickets at www.ctwebdev.de ***END OF SPONSOR NOTICE***
Today's topics: Agri-Gaia, plastic waste, Disney+ Star, Doomsday Clock
Today's topics: economic crisis, Joy-Con drift, Facebook, Starlink
Today's topics: autonomous driving, Birdwatch, Eventim, OpenLabs
Today's topics: working from home, Wingcopter, fungal mycelium, Clubhouse
In this episode of the Search News You Can Use Podcast, Marie Haynes discusses: Unraveling the indexing issues from September 2020 Can syndicating content hurt Google’s assessment of quality for your site? WordPress now lets you add rel=“sponsored” to links Would you ask for a link from a friend who owns an unrelated website? Local SEO flux New metrics in GMB New info on whether you can have multiple GMB listings Mini site review: ladyleeshome.com Mini site review: prettypurpledoor.com Q&A: Should you noindex your terms and conditions and privacy policy pages? Also, in newsletter: Google shopping is going free worldwide by mid October New study looks at optimal page title length Info from Cyrus Shepard on great content How developers can get value from using GSC Tips on getting your web story indexed Email outreach ideas from Siege Media Changes to page layout can impact your ability to rank Cloudflare announces free analytics New accessibility feature in Chrome Dev Tools for colour blindness Local SEO: How to get a review removed Local SEO: Is Google upping their spam fighting game? This episode corresponds with newsletter episode 153: https://courses.mariehaynes.com/search-news-you-can-use/episode-153 Submit a question for the next Q&A: Mariehaynes.com/qa-with-mhc/ Subscribe to the newsletter: Mariehaynes.com/newsletter Twitter: @Marie_Haynes - Twitter.com/Marie_Haynes
In this episode, we cover Ruby 3 typing, GitHub running Ruby 2.7 in production, and Chrome DevTools CSS overview. We also speak with Jonas Downey, design lead at Basecamp, and co-creator of the Hello Weather app, about a Twitter thread he wrote titled, “Here’s a little tale about what it's like to be an indie iOS developer working under Apple's 800-pound gorilla rule." Finally, we chat with Dan Abramov, software engineer at Facebook, creator of Redux, and co-author of Create React App. We’ll chat with Dan about React’s decision to release React 17 with no new features. Show Notes DevDiscuss (sponsor) Triplebyte (sponsor) CodeNewbie (sponsor) Vonage (sponsor) GitHub React Basecamp The State of Ruby 3 Typing Ruby 2.7 GitHub running Ruby 2.7 in production New in Chrome: CSS Overview Changing World, Changing Mozilla Here’s a little tale about what it's like to be an indie iOS developer working under Apple's 800-pound gorilla rule... Hello Weather
JavaScript Remote Conf 2020 May 14th to 15th - register now! Dan Shappir takes the lead to explain all of the acronyms and metrics for measuring the performance of your web applications. He leads a discussion through the ins and outs of monitoring performance and then how to improve and check up on how your website is doing. Panel AJ O’Neal Aimee Knight Steve Edwards Dan Shappir Sponsors Taiko, free and open source browser test automation Educative.io | Click here for 10% discount ____________________________________________________________ "The MaxCoders Guide to Finding Your Dream Developer Job" by Charles Max Wood is now available on Amazon. Get Your Copy Today! ____________________________________________________________ Links : The Picture element - HTML: Hypertext Markup Language | MDN Picks AJ O’Neal: The Way of Kings Taco Bell Aimee Knight: web.dev @DanShappir Dan Shappir: New accessibility feature in Chrome Dev Tools: simulate vision deficiencies, including blurred vision & various types of color blindness. In Canary at the bottom of the Rendering tab. Better Call Saul Follow JavaScript Jabber on Twitter > @JSJabber
Episode 095 – Debugging With Chrome Dev Tools. – We have a bonus episode for you today and it’s about how to debug your code. What we’re going to teach you is the skill of how to fix problems in your code. The difference, usually, between a junior developer and a senior developer is how …
Ubiquitous "apps" more and more often don't require installation on our devices. Progressive web apps embrace this trend and try to further close the gap between native and web applications. Sending notifications, offline support, installation from a store: these capabilities have always been associated with native solutions, and they will soon be available to web developers too. Of course, not everything can be done with a PWA. In this podcast we try to answer the question of when it is worth reaching for this technology. 0:00 - Intro, announcements 2:30 - Introducing the guest, warm-up questions 5:00 - Getting interested in PWAs 7:20 - University vs. PWA applications 8:20 - What kinds of solutions is a PWA a good fit for? 11:00 - App behavior offline, desktop apps 12:10 - Adding a site icon to the desktop / home screen 13:20 - Sending notifications from the app 14:30 - Support across browsers (Safari, Edge, Chrome) 15:00 - Shipping PWAs to app stores 16:40 - What kinds of apps is a PWA NOT suited for? 18:30 - Advice for junior developers 20:20 - Frameworks and libraries that support building progressive web apps 22:00 - Service workers: why and how to use them? 25:00 - Testing your code 26:30 - What the future of PWAs may look like 29:20 - An interesting open-source project: Hospital Run (a PWA used to collect medical data) Basics: Service Worker https://developers.google.com/web/fundamentals/primers/service-workers https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API, Fetch API https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API Books: -"Progresywne aplikacje webowe. Potęga aplikacji natywnych w przeglądarce" Tal Ater, O'Reilly -"Progressive Web Apps with Angular" Majid Hajian, Apress Codelabs: -PWA Fire's Codelabs: https://pwafire.org/developer/codelabs/index.html -Your first PWA by Google (https://codelabs.developers.google.com/codelabs/your-first-pwapp/#0), -Developing PWA 02: Offline quickstart: https://codelabs.developers.google.com/codelabs/pwa-offline-quickstart/#0 -Build PWA using workbox: https://codelabs.developers.google.com/codelabs/workbox-lab/index.html?index=..%2F..index#0, Tools / libraries: -JavaScript library for service workers - Workbox (https://developers.google.com/web/tools/workbox), -Chrome DevTools, Lighthouse (audit tests), -Web Manifest Generator: https://pwafire.org/developer/tools/get-manifest/ -PWA Image (Icons) Generator: https://www.pwabuilder.com/imageGenerator Open source projects: Hospital Run (open source, modern software for charitable hospitals in the developing world): https://github.com/hospitalrun YouTube materials: Progressive Web App Training (playlist): https://www.youtube.com/watch?v=psB_Pjwhbxo&list=PLNYkxOF6rcIB2xHBZ7opgc2Mv009X87Hh
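The 22:00 segment on service workers boils down to a registration call plus a fetch handler. Here is a minimal, illustrative sketch (the file names, cache name, and the cache-first strategy are placeholders chosen for the example, not something taken from the episode):

```ts
// main.ts - register the service worker (the file name is a placeholder).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js - cache-first strategy: serve from the cache when possible and fall
// back to the network, which is what gives a PWA its offline behaviour.
self.addEventListener('install', (event: any) => {
  event.waitUntil(
    caches.open('app-v1').then((cache) => cache.addAll(['/', '/index.html']))
  );
});

self.addEventListener('fetch', (event: any) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```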
Today Jeff Byer (@globaljeff) talks about his experiment with workspaces in Chrome Dev Tools, the Shopify universe, upcoming project launches, real-world project issues, and tech talk.
There is a feature of Chrome DevTools that lets you: See the code of any given resource the current web page is using (like CSS and JavaScript). “Pretty Print” it (format it for readability) Save it to disk Use that saved version to override the live version, even on page refresh. That last one is pretty awesome. If you’re debugging a problem that only seems to happen on the live website, it gives you a debugging tool that will allow … Read article “#174: Using Local Overrides in DevTools”
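Local Overrides is a DevTools UI feature, so there is no code to copy from the article, but the same "answer with my edited copy instead of the live file" idea can be scripted. Below is a hedged sketch using Puppeteer's request interception (the URL and local path are placeholders); it is an analogous technique, not how DevTools implements overrides:

```ts
// Sketch: intercept a live stylesheet and answer with a local copy instead,
// similar in spirit to DevTools Local Overrides (URL and path are placeholders).
import { readFile } from 'node:fs/promises';
import puppeteer from 'puppeteer';

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on('request', async (request) => {
    if (request.url().endsWith('/styles/site.css')) {
      // Respond with the locally edited file instead of the live one.
      const body = await readFile('./overrides/site.css', 'utf8');
      await request.respond({ status: 200, contentType: 'text/css', body });
    } else {
      await request.continue();
    }
  });

  await page.goto('https://example.com/');
  await browser.close();
}

main();
```

Because the interception runs on every request, the override survives page reloads, much like the DevTools feature described above.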
We continue our interviews with speakers of the #HolyJS conference, and today we talk with Aleksey Kozyatinskiy, a Software Engineer at Netflix. Aleksey tells us about his path into the IT world and his work at Google and Netflix. Useful links: Alexander Kulikov's lectures on algorithms: https://www.youtube.com/watch?v=t-t_VaVY-TA&list=PLlb7e2G7aSpQutUr7qYIunvm04cqdr5mx The book "Peopleware: Productive Projects and Teams": https://www.amazon.com/Peopleware-Productive-Projects-Tom-DeMarco/dp/0932633439 All DevShow episodes: https://goo.gl/b6vBPp DevShow as a podcast on SoundCloud: https://soundcloud.com/loftblog DevShow as a podcast on iTunes: https://itunes.apple.com/ru/podcast/loftblog/id1313900856?l=en&mt=2 Online education school: https://loftschool.com/ Telegram Loftblog: https://t-do.ru/loftblog Telegram IT learning: https://t-do.ru/it_loft Slack: http://slack.loftblog.ru/ Website: http://loftblog.ru/ Instagram: https://www.instagram.com/loftblog/ VKontakte group: http://vk.com/loftblog Facebook: http://www.facebook.com/loftblog Twitter: http://twitter.com/loft_blog
Panel Joe Eames Luis Hernandez Mike Dane Sam Julien Joined by special guest: Tommy Williams Episode Summary In this episode of the Dev Ed podcast, the panel is joined by special guest Tommy Williams, who is currently a Software Manager at Playware Media, and has a strong background in web development. He starts off the discussion by explaining what the term performance tuning really means, and the other panelists join in with their own definitions and give examples to elaborate on it. They talk at length about the tradeoff between performance tuning and maintainability, each sharing their valuable experiences. They then steer the discussion towards learning performance tuning and what resources and tools to use, recommend some good courses to listeners, and discuss how to go about learning it in general. Tommy talks about the performance issues that can come up while writing web applications and ways to practice performance tuning, followed by the panelists' tips on it as well. They conclude the show with picks. Links jsPerf Chrome DevTools Lighthouse Modern DevTools Umar Hansa Tommy’s LinkedIn Picks Mike Dane: Saint Thomas Luis Hernandez: Refactoring: Improving the Design of Existing Code Sam Julien: Keyboard Maestro Joe Eames: Zombicide Tommy Williams: Dominican slang word - Vaina
We wrap up the Chrome DevTools trilogy with the Network section, where you can see the HTTP requests made by your page and how it performs. #chrome #devtools #network
Continuing the Chrome DevTools series, today I show you the Console tab. #chrome #devtools #console Author: Joseda Barrios Follow us at https://twitter.com/RadioDevPodcast and https://t.me/RadioDev
A rarity! The complete podcast core team united in one revision. Not quite as rare: the lion's share of the speaking time goes to Peter, who gives an insight into developing a Chrome DevTools extension. Our sponsor: This episode is sponsored by Flagbit from Karlsruhe. Flagbit designs, develops, and optimizes online shops for its customers and is especially looking for frontend developers […]
The Chrome browser includes a number of very interesting tools. Here I cover two of them, focused on web design. Author: Joseda Barrios Follow us at https://twitter.com/RadioDevPodcast and https://t.me/RadioDev
Showing Love debugger community Rey Bango, (others) helped get me into Mozilla MDN, my go-to when I don’t understand JavaScript. Google and StackOverflow. When the docs just aren’t good enough Twitter: When I’m ragey and I need to tell people on the Internet they’re wrong Meetups, WaffleJS Nicholas Zakas, ESLint AST Explorer, the nerdiest! Main Topic: Firefox Debugger Jason: “The term debugging is terrible” “We want to understand our creations” What tools can we use? Firefox DevTools, Chrome DevTools, TrackJS What is the hardest bug you’ve ever had to debug? Todd: building a debugging tool to solve old Android issues Logan: debugging async flow of data. Jason: debugging the debugger with WebReplay to rewind your debugging Coming soon to Firefox DevTools Debugger Log points Better breakable location information (“specificity”) Column breakpoints Event listener breakpoints XHR Breakpoints WebReplay Panelists: Jason Laster Logan Smyth Hosts: Todd Gardner David Walsh This episode is sponsored by TrackJS JavaScript Error Monitoring. Find and fix the bugs in your web application with the context to see real user errors. Start your free trial at TrackJS.com.
Troy Hunt, creator of “Have I been pwned”, gives a virtual keynote that explores how security threats are evolving - and what we need to be especially conscious of in the modern era. In this keynote, you’ll learn: Real world examples of both current and emerging threats How threats are evolving and where to put your focus How to stem the flow of data breaches and protect against malicious activity and much more! Transcript Cindy Ng: Troy Hunt is a world-renowned web security expert known for public education and outreach on security topics. And we recently invited Troy to give a keynote on the modern state of insecurity. Troy Hunt: Then moving on another one I think is really fascinating today is to look at the supply chain, the modern supply chain. And what we're really talking about here is what are the different bits and pieces that go into modern-day applications? And what risks do those bits and pieces then introduce into the ecosystem? There's some interesting stats, which helps set the scene for why we have a problem today. And the first that I want to start with, the average size of webpage, just over 700 kilobytes in 2010. But over time, websites have started to get a lot bigger. You fast forward a couple of years later and they're literally 50% larger, growing very, very fast. Go through another couple of years, now we're up to approaching 2 megabytes. Get through to 2016 and we're at 2.3 megabytes. Every webpage is 2.3 megabytes. And when you have a bit of a browse around the web, maybe just open up the Chrome DevTools and have a look at the number of requests that come through. Go through on the application part of the DevTools, have a look at the images. And have a look at how big they are. And how much JavaScript, and how many other requests there are. And you realize not just how large pages are, but how the composition is made up from things from many, many different locations. So, we've had this period of six years where we've tripled the average size of a webpage. And of course, ironically, during that period we've become far more dependent on mobile devices as well. Which very frequently have less bandwidth or more expensive bandwidth, particularly if you're in Australia. So, we've sort of had this period where things have grown massively in an era where we really would have hoped that maybe they'd actually be a little bit more efficient. The reason I stopped at 2016 is because the 2.3-megabyte number is significant. And the reason it's significant is because that's the size of Doom. So, remember Doom, like the original Doom, like the 1993 Doom, where if you're a similar age to me or thereabouts, you probably blew a bunch of your childhood. When you should've been doing homework, just going through fragging stuff with BFG. So, Doom was 2.3 megabytes. That's the original size of it. And just as a reminder of the glory of Doom, remember what it was like. You just wander around these very shoddy looking graphics, but it was a first-person shoot-em-up. There were monsters, and aliens, and levels, and all sorts of things. Sounds. All of that went into two floppy disks and that's your 2.3 megabytes. So, it's amazing to think today when you go to a website, you're looking at the entire size of Doom, bundled into that one page, loaded on the browser. Now, that then leads us into where that all goes. So, let's consider a modern website. The U.S. Courts website. And I actually think it's pretty cool looking government website. Most government websites don’t look this cool. 
But, of course, to make a website look cool, there's a bunch of stuff that's got to go into it. So, if we break this down by content type, predictably images are large. You've got 1.1 megabytes worth of images, so almost half the content there is just images. The one that I found particularly fascinating though when I started breaking this apart is the script. Because you've got about 3/4 of a megabyte worth of JavaScript. Now keep in mind as well, JavaScript can be very well optimized. I mean, we should be minimizing it. It should be quite efficient. So, where does 726 kilobytes worth of script go? Well, one of the things we're seeing with modern websites is that they're being comprised of multiple different external services. And in the case of the U.S. Courts website, one of those web services is BrowseAloud. And BrowseAloud is interesting. So, this is an accessibility service made by a company called Texthelp. And the value proposition of BrowseAloud is that if you're running a website, and accessibility is important to you...and just to be clear about what we mean by that, if someone is visually impaired, if maybe English is a second language, if they need help reading the page, then accessibility is important. And accessibility is particularly important to governments because they very often have regulatory requirements to ensure that their content is accessible to everyone. So, the value proposition of a service like BrowseAloud is that there's this external thing that you can just embed on this site. And the people building the site can use all their expertise to sort of actually build the content, and the taxonomy, and whatever else of the site. They just focus on building the site and then they pull in the external services. A little bit like we're pulling in an external library. So, these days there's a lot of libraries that go into most web applications. We don't go and build all the nuts and bolts of everything. We just throw probably way too much jQuery out there. Or other themes that we pull from other places. Now, in the case of BrowseAloud, it begs the question, what would happen if someone could change that ba.js file? And really where we're leading here, is that if you can control the JavaScript that runs on a website, what would you do? If you're a bad dude, what could you do, if you could modify that file? And the simple answer is that once you're running JavaScript in the browser and you have control over that JavaScript, there is a lot you can do. You can pull in external content, you can modify the DOM. You can exfiltrate anything that can be accessed via client script. So, for example, all the cookies, you can access all the cookies as long as the cookies aren't flagged as HttpOnly. And guess what? A lot of them which should be, still are. So, you have a huge amount of control when you can run arbitrary JavaScript on someone else's website. Now, here's what went wrong with the BrowseAloud situation. So, you've got all of these websites using this exact script tag, thousands of them, many of them government websites. And earlier this year, Scott Helme, he discovered that the ICO, the Information Commissioner's Office in the UK, so basically the data regulator in the UK, was loading this particular JavaScript file. And at the top of this file was some script which shouldn't be there. And if you look down at about the third line and you see Coinhive, you start to see where all of this has gone wrong. Now, let's talk about Coinhive briefly.
So, everyone's aware that there is cryptocurrency and there is cryptocurrency mining. The value proposition of Coinhive...and you can go to coinhive.com in your browser. Nothing bad is going to happen. You can always close it. But bear with me, I'll explain. So, the value proposition of coinhive.com is you know how people don't like ads. You know because you get a website, and there's tracking, and they're obnoxious, and all the rest of it. Coinhive believe that because they don't like ads, but you might still want to monetize your content, what you can do is you get rid of the ads, and you just run a crypto miner on people's browser. And what could go wrong? And in fairness, if there's no tracking and you're just chewing up a few CPU cycles, then maybe that is a better thing, but it just feels dirty. Doesn't it? You know, like if you ever go to a website and there's a Coinhive crypto miner on there, and they usually mine Monero, and you see your CPU spiking because it's trying to chew up cycles to put money in someone else's pocket, you're going to feel pretty dirty about it. So, there is a valid value proposition for Coinhive. But unfortunately, when you're a malicious party, and there's a piece of script that you can put on someone else's website, and you can profit from it, well then obviously, Coinhive is going to be quite attractive to you as well. So, what we saw was this Coinhive script being embedded into the BrowseAloud JavaScript file, then the BrowseAloud JavaScript file being embedded into thousands of other websites around the world. So, U.S. Courts was one. U.S. House of Representatives was another. I mentioned the Information Commissioner's Office, the NHS in Scotland, the National Health Service, so all of these government websites. Now, when Scott found this, one of the things that both of us found very fascinating about it is that there are really good, freely accessible browser security controls out there that will stop this from happening. So, for example, there are content security policies. And content security policies are awesome because they're just a response header, and every single browser supports them. And a CSP lets you say, "I would like this browser to be able to load scripts from these domains and images from those domains." And that's it. And then if any script tries to be loaded from a location such as coinhive.com, which I would assume you're not going to whitelist, it gets blocked. So, this is awesome. This stops these sorts of attacks absolutely dead. The problem with the adoption of content security policies is all the sites not using it. And that's about 97%. So, it's about a 3% adoption rate of content security policies. And the reason why I wanted to flag this is because this is something which is freely accessible. It's not something you go out and spend big bucks on a vendor with. When I was in London at the Infosecurity EU Conference, loads of vendors there selling loads of products and many of them are very good products, but also a lot of money. And I'm going, "Why aren't people using the free things?" Because the free things can actually fix this. And I think it probably boils down to education more than anything else. Now, interestingly, if we go back and look at that U.S. Courts website, here's how they solved the problem. So, they basically just commented it all out, and arguably this does actually solve the problem. Because if you comment out the script, and someone modifies it, well, now it's not a problem anymore.
But now you've got an accessibility problem. I actually had people after I've been talking about this, say, "Oh, you should never trust third-party scripts. You should just write all this yourself." This is an entire accessibility framework with things like text to speech. You're not going to go out and write all that yourself. You've actually got to go and build content. Instead, we'd really, really like to see people actually using the security controls to be able to make the most of services like this, but do so in a way that protects them if anything goes wrong. Now, it's interesting to look at sites that are still embedding BrowseAloud but are doing so with no CSP. And in case anyone's wondering, no Subresource Integrity as well. So, things like major retailers, there are still US government sites, there are still UK government sites. And when I last looked at this, I found a UK transportation service as well. Exactly the same problem. And one of the things that sort of makes me lament is that even after we have these issues where we've just had an adversary run arbitrary script in everyone's browser, and let's face it, just Coinhive is dodging a bullet. Because that is a really benign thing in the scope of what you could have done if you could have run whatever script you wanted in everyone's browser. But even after all that, these services are still just doing the same thing. So, I don't think we're learning very well from previous incidents. ...
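The two defenses the keynote name-checks, Content Security Policy and Subresource Integrity, are easy to see in a short sketch. This is illustrative only: the Node server, origin names, file path, and hash are all placeholders, not anything taken from the keynote.

```ts
// Minimal sketch (illustrative only): a CSP response header restricts where
// scripts may load from, so a script injected from an unlisted origin such as
// coinhive.com would be refused by the browser.
import { createServer } from 'node:http';
import { createHash } from 'node:crypto';

const page = `<!doctype html>
<html>
  <head>
    <!-- Subresource Integrity: the browser rejects the script if its hash changes,
         e.g. when a third-party file like ba.js is tampered with. The hash below
         is a placeholder, not a real digest. -->
    <script src="https://thirdparty.example/ba.js"
            integrity="sha384-PLACEHOLDER_BASE64_DIGEST"
            crossorigin="anonymous"></script>
  </head>
  <body>Hello</body>
</html>`;

createServer((req, res) => {
  // Only allow scripts from our own origin and the one trusted third party.
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' https://thirdparty.example"
  );
  res.setHeader('Content-Type', 'text/html');
  res.end(page);
}).listen(8080);

// Generating an SRI digest for a known-good copy of a third-party script:
const digest = createHash('sha384').update('/* contents of ba.js */').digest('base64');
console.log(`integrity="sha384-${digest}"`);
```

With a policy like that in place, the injected miner is simply blocked, and the SRI hash makes the browser refuse a tampered third-party file rather than execute it.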
GDPR (DSGVO); cookies; CSRF (cross-site request forgery); Embetty by heise online; ePrivacy Regulation; Pad-Switch; Efail (mail clients and encryption); Google I/O 2018; We Are Developers; Rimac electric car; Microsoft HoloLens; iPhone X reboot helps; Garmin Forerunner 235; Podbike; Humble Book Bundle Web Design & Development; CSS Grid; Bootstrap framework; Bulma CSS framework; Sakura CSS framework; Normalize.css; Web Components; Chrome DevTools; CSS Flexbox; HTTP/2; WebSockets; Welt.de; Google AMP; Roblox; Franzis build-it-yourself bat detector; Aua-uff-Code! podcast; Machine Learning Guide podcast; ElixirConf Machine Learning talk; Machine Learning with TensorFlow on Google Cloud Platform Specialization (Coursera). Guests: Stefan and Ulrich
Scott and Wes detail the tips and tools you need to get better at debugging. Freshbooks - Sponsor Get a 30-day free trial of Freshbooks at freshbooks.com/syntax and put SYNTAX in the “How did you hear about us?” section. Coffeecup’s CSS Grid Builder Tool Check out Coffeecup’s CSS Grid.cc builder tool and resources to learn, prototype and build next gen layouts with CSS Grid! Show Notes 4:00 Read the stack trace 7:35 Make sure you aren’t debugging production Make sure you’re not cached 8:40 Check the network panel for the whole response CORS (Cross-Origin Resource Sharing) 12:04 Use debugger commands in the browser Set breakpoints (race conditions) 13:40 Use Source Maps 15:29 Make full use of all console methods console.group/groupEnd/info console.log({objName}) 17:02 View your code in other browsers Make sure your browser is up to date 18:50 Step into and step over function Useful for especially tricky issues 19:47 Look into Scope in dev tools panel 20:33 Recreate it in CodePen or Create React App Helps to quarantine your code Verify where the problem actually is 24:12 Take a break Helps clear your head and approach your problem from a different angle 25:40 Rubber Duck Debugging Forcing yourself to talk it out will often reveal problems/issues 26:40 Check GitHub issues / ask in Slack Leave your solution in the comments for others 28:12 Use Node --inspect flag 29:57 Read the code in your libs (if you can) 32:34 - Chrome Dev Tools Tabs Rundown 33:10 - Network tab 34:15 - Performance tab 35:58 - Lesser known tools 36:15 - Firefox Grid Debug 36:20 - Firefox Fonts tab 37:10 - More tools (three dots menu in top right of Chrome Dev Tools window) 37:39 - Chrome Animations Inspector 38:34 - /dev tips & Modern DevTools Course 39:41 - Chrome & FF Layers for 3D and full view of canvas & window 40:51 - Sensors for overriding/faking device sensors Hot tip: Use Instagram on the desktop - Go to Instagram and set the user agent to iPad and it will work as it does on that actual device Hot tip: Get free internet on a plane - If the airplane offers free internet for a certain device, change your user agent to that device and then switch back 43:12 - Favorite DevTools extensions Apollo React Redux Vue Lighthouse JSON Formatter 44:06 - Application/Storage tab 44:41 - Firefox Grid Inspector ××× SIIIIICK ××× PIIIICKS ××× Scott: Hotel Tonight App Wes: Coffee table - If you’re trying to build an outdoor coffee table, use a piece of granite Shameless Plugs Scott’s Level 2 React Course coming out THIS WEEK! - subscribe and be notified when it’s released Wes’ Courses - Advanced React course coming soon - subscribe to be notified when it’s released Tweet us your tasty treats! Scott’s Instagram LevelUpTutorials Instagram Wes’ Instagram Wes’ Twitter Wes’ Facebook Scott’s Twitter Make sure to include @SyntaxFM in your tweets
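A few of the console tips from those show notes (grouping, shorthand object logging, the debugger statement, and Node's --inspect flag) in one tiny illustrative sketch; the variable names are made up for the example:

```ts
// Grouping related logs keeps noisy output readable in the Console panel.
console.group('checkout');
const cart = { items: 3, total: 42 };

// Shorthand object logging prints the variable name alongside its value,
// i.e. "{ cart: { items: 3, total: 42 } }" instead of an anonymous object.
console.log({ cart });
console.info('totals recalculated');
console.groupEnd();

// Pauses execution right here whenever DevTools is open.
debugger;

// Node-side equivalent of browser debugging mentioned in the episode:
// run `node --inspect app.js` and attach Chrome DevTools via chrome://inspect.
```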
On this episode of Eat Sleep Code, guest Robert Boedigheimer discusses debugging HTTP with Fiddler and Chrome DevTools. Robert talks about Fiddler, how it's used to sniff out mixed HTTPS content, and what protocol-less URLs are. Robert shares his tips on performance tuning by troubleshooting website images. Working with location APIs and mimicking slow network traffic are also discussed.
Developer bias is something all developers deal with, whether knowingly or not. Trey Shugart, the creator of SkateJS, joins us this week to discuss the advantages and disadvantages of developer bias. Following up on his talk at Web Components Remote Conference, we look at what problems developer bias causes us in our day-to-day lives, and Trey provides some tips on how to step back and look at problems more objectively. Around the Web in Two Minutes Safari 10.1 Release Includes Fetch, CSS Grid Layout, ES2016-2017 Support & Custom Elements V1! This release also includes support for ES6 modules, making Safari the first browser to ship this feature! https://webkit.org/blog/7477/new-web-features-in-safari-10-1/ Chrome DevTools landed JS/CSS coverage moved out of experiments Currently in Canary M59 Useful for identifying code which could be split or lazy loaded https://twitter.com/addyosmani/status/848907772318015488 Edge confirms that an updated CSS Grid implementation is in development https://twitter.com/MSEdgeDev/status/848997331567497216 Interesting Blink Intents to Ship of the week: Writable Streams https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/qpY1dj_ND-Q Animated PNGs https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/xEOyu8QFCr0/3oqSlHBxCwAJ InputEvent https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/W-Q1yWW-zas Firefox added experimental support for async iteration into Firefox nightly https://groups.google.com/forum/#!topic/mozilla.dev.platform/4--twpqTrMY Engineers gathered in Japan for two days of Service Worker standards meetings covering potential version 2 updates. Guests Trey Shugart (@treshugart) Panel Danny Blue (@dee_bloo) Leon Revill (@RevillWeb) Justin Ribeiro (@justinribeiro)
Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby Concurrency in Ruby 3 with Guilds, Beware the ORM: Locking and Joins and 45 Ruby Blogs, Using Phoenix with Legacy Rails Applications and Tensorflow.rb, JavaScript Meet Leaflet 1.0, Loading images progressively using Gaussian blur and Who Needs AMP? How to Lazy Load Responsive Images Quick and Easy with Layzr.js, Codebase Overview, Node.js debugging with Chrome DevTools (in parallel with browser JavaScript) and IronNode - JavaScript debugging for Node.js
An everyday tool, Chrome DevTools offers web developers a significant amount of power to debug and make the web better. In this episode, Konrad Dzwinel sits down with Justin Ribeiro to discuss building tools for DevTools to expand your development horizons. Resources and Links Betwixt - https://github.com/kdzwinel/betwixt SnappySnippet - https://github.com/kdzwinel/SnappySnippet DOMListenerExtension - https://github.com/kdzwinel/DOMListenerExtension Chrome DevTools frontend - https://github.com/ChromeDevTools/devtools-frontend npm-publish-devtools-frontend - https://github.com/paulirish/npm-publish-devtools-frontend debugger-protocol-viewer - https://github.com/ChromeDevTools/debugger-protocol-viewer Getting started building Chrome Extensions - https://developer.chrome.com/extensions Contributing to Chromium: an illustrated guide, Monica Dinculescu - http://meowni.ca/posts/chromium-101/ On This Episode Konrad Dzwinel (@kdzwinel) Justin Ribeiro (@justinribeiro)
Chrome DevTools is the topic of the week, which we've simply deemed "awesome".
Richard, Chet, Ben, and not Tor in the spacious London studio. In this Tor-less episode, Chet talks with Ben Murdoch and Richard Coles from the Android WebView team. We talk about WebView's ability to update outside of platform releases, the transition from the original WebView to the new Chromium WebView widget, about some of the new features and APIs in recent releases, and about cute kitten bitmaps. Tor didn't have much to say, about kittens or anything else. Subscribe to the podcast feed or download the audio file directly. Relevant Links: Beta Channel for Android WebView, Google+ Beta Channel Community, Bug Tracker, Chrome Dev Tools, WebView API. Ben Murdoch: google.com/+BenMurdoch, @ksasq. Richard Coles: google.com/+RichardColesGoogle. Tor: google.com/+TorNorbye, @tornorbye. Chet: google.com/+ChetHaase, @chethaase. Also, thanks to continued support by Bryan Gordon, our audio engineer who puts this stuff together every time.
Check out Ruby Remote Conf! 02:19 - Gleb Bahmutov Introduction Twitter GitHub Blog 03:21 - Perceptual Performance Paul Irish: "Delivering the goods" Fluent 2014 Keynote Gleb Bahmutov: Improving Angular web app performance example. [YouTube] Gleb Bahmutov: Profile and Optimize Your JavaScript Like a Ninja 07:09 - Getting User Feedback 12:15 - Profiling, Tools and Techniques code-snippets 16:45 - Performance Optimization The Pareto Principle Chrome DevTools 20:38 - Benchmarks 22:20 - Extracting Value from Profiling angular-vs-repeat 26:11 - Top Performance Problems Two-Way Binding Keeping Up-to-Date with Versions Minimize the Number of Expressions in Template Elements 28:44 - Performance Lessons Ng-webworker Dave Smith: Angular + React = Speed @ ng-conf 2015 34:30 - Public Opinion on Performance in Angular 40:57 - Drive-by Optimizations 42:26 - Angular 2 Performance Predictions Minko Gechev: Bringing Immutability to Angular @ ng-vegas 2015 More From Gleb: Fast Legoization Angular plus React equals Speed revisited JavaScript and AngularJs learning resources Picks The CodeNewbie Podcast (Chuck) Ruby Remote Conf (Chuck) Wait Wait... Don't Tell Me! (Chuck) Ask Me Another (Chuck) Ruby Rogues (Chuck) JavaScript Jabber (Chuck) The Freelancers’ Show (Chuck) The iPhreaks Show (Chuck) RailsClips (Chuck) Car Talk (Gleb) Colorsublime (Gleb)
01:35 - Katya Eames Introduction Twitter [YouTube] Katya Eames: How to Teach Angular to Your Kids 01:52 - Ben Nadel Introduction Twitter GitHub Blog Adventures in Angular Episode 029: Angular At Work with Ben Nadel InVision @InVisionApp 04:47 - Performance Basecamp Nested Pages 08:04 - User Experience 10:01 - Fixing Performance Problems as a Team Engineering Validation “Premature optimization is the root of all evil -- Donald Knuth” DOM Manipulation ngRepeat Screen Experience 23:28 - Finding Performance Issues Chrome Developer Tools Firefox Firebug Utilizing Chrome Dev Tools and Creating the Videos on Ben’s Blog “Imposter Syndrome” Addy Osmani Paul Irish 29:27 - “Just-in-Time View Construction” 34:43 - ngIf 37:16 - Angular 2 Opinions [YouTube] Dave Smith: Angular + React = Speed Unit Directional Data Flow & Functionality Victor Savkin: Change Detection in Angular 2 [Egghead.io] John Lindquist: Angular 2: Template Syntax ES5, ES6 AtScript, TypeScript traceur-compiler Babel 46:01 - Moving to 2.0 Picks BrowserSync (John) [Egghead.io] Angular 2: Template Syntax (Joe) Win an InVision App T-Shirt! (Lukas) Adventures in Angular (Lukas) WELCOME TO NIGHT VALE (Katya) Being and Time (Harper Perennial Modern Thought) by Martin Heidegger (Ward) Angular Grid (Ward) Steelheart (The Reckoners) by Brandon Sanderson (Chuck) StarTech.com MUHSMF2M 2m 4 Position TRRS Headset Extension Cable (Ben) Any Given Sunday (Ben) News ng-vegas: May 7th and 8th, 2015! AngularU in the Bay Area in June
In this episode of The Treehouse Show, Nick Pettit (@nickrp) and Jason Seifer (@jseifer) talk about the latest in web design, web development, HTML5, front end development, and more.
In this episode of The Treehouse Show, Nick Pettit (@nickrp) and Jason Seifer (@jseifer) talk about the latest in web design, web development, HTML5, front end development, and more.
The panelists discuss Chrome DevTools with Paul Irish.