Podcasts about WebAssembly

A binary instruction format that lets web pages (and other hosts) run compiled code at near-native speed

  • 353 podcasts
  • 843 episodes
  • 48m average episode duration
  • 5 new episodes per week
  • Latest episode: May 19, 2025

Popularity trend: 2017–2024

Best podcasts about WebAssembly

Latest podcast episodes about WebAssembly

The InfoQ Podcast
How developer platforms, Wasm and sovereign cloud can help build a more effective organisation

The InfoQ Podcast

May 19, 2025 · 37:54


Max Körbächer, Managing Partner at Liquid Reply, discusses the coming of age of the Kubernetes ecosystem and how and when an organisation should use it to build its platform. Also, he touches on how to measure its success and how WebAssembly and Kubernetes can play together to obtain the most effective usage of your infrastructure. Read a transcript of this interview: https://bit.ly/3RK7DuP Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Boston (June 9-10, 2025) Actionable insights on today's critical dev priorities. devsummit.infoq.com/conference/boston2025 InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - Twitter: twitter.com/InfoQ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq

Rustacean Station
Dataland with Howard Zuo

Rustacean Station

May 9, 2025 · 60:58


Allen Wyma talks with Howard Zuo, CEO at Dataland, a software company that builds AI agents for customer support teams, using Rust.

Contributing to Rustacean Station
Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor!
Twitter: @rustaceanfm
Discord: Rustacean Station
Github: @rustacean-station
Email: hello@rustacean-station.org

Timestamps
[@0:00] - Introduction to Howard Zuo and Dataland
[@2:21] - Supported data sources and plugins
[@5:36] - Challenges with data diversity
[@9:12] - Focus on customer support teams
[@13:02] - Choosing Rust for performance and safety
[@18:39] - Comparing Rust to Go
[@24:10] - Learning async and debugging
[@30:28] - Rust's ecosystem for data processing
[@48:32] - Rust and WebAssembly for UI performance
[@57:14] - Closing thoughts

Credits
Intro Theme: Aerocity
Audio Editing: Plangora
Hosting Infrastructure: Jon Gjengset
Show Notes: Plangora
Hosts: Allen Wyma

Software Engineering Radio - The Podcast for Professional Software Developers

Ashley Peacock, the author of Serverless Apps on Cloudflare, speaks with host Jeremy Jung about content delivery networks (CDNs). Along the way, they examine dependency injection with bindings, local development, serverless, cold starts, the V8 runtime, AWS Lambda vs Cloudflare workers, WebAssembly limitations, and core services such as R2, D1, KV, and Pages. Ashley suggests why most users use an external database and discusses eventually consistent data stores, S3-to-R2 migration strategies, queues and workflows, inter-service communication, durable objects, and describes some example projects. Brought to you by IEEE Computer Society and IEEE Software magazine.

Coffee and Open Source
Ralph Squillace

Coffee and Open Source

May 6, 2025 · 71:19


Ralph is Microsoft's representative on the board of the Bytecode Alliance Foundation and is responsible for WebAssembly outside the browser at the company. Ralph has worked with Linux for several decades and has worked on many Azure services over the years. His team is also responsible for the containerd project Runwasi, part of the SpinKube project.

You can find Ralph on the following sites: Bluesky, GitHub.

PLEASE SUBSCRIBE TO THE PODCAST: Spotify, Apple Podcasts, YouTube Music, Amazon Music, RSS Feed.

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com

Coffee and Open Source is hosted by Isaac Levin.

The New Stack Podcast
Prequel: Software Errors Be Gone

The New Stack Podcast

May 5, 2025 · 5:13


Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly.

The urgency behind Prequel's mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change.

Learn more from The New Stack about the latest observability insights:
Why Consolidating Observability Tools Is a Smart Move
Building an Observability Culture: Getting Everyone Onboard

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Paul's Security Weekly
More WAFs in Blocking Mode and More Security Headaches from LLMs - Sandy Carielli, Janet Worthington - ASW #326

Paul's Security Weekly

Apr 15, 2025 · 74:45


The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering enough value that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems.

Resources:
https://www.forrester.com/blogs/breaches-and-lawsuits-and-fines-oh-my-what-we-learned-the-hard-way-from-2024/
https://www.forrester.com/blogs/wafs-are-now-the-center-of-application-protection-suites/
https://www.forrester.com/blogs/are-you-making-these-devsecops-mistakes-the-four-phases-you-need-to-know-before-your-code-becomes-your-vulnerability/

In the news, a crates.io logging mistake shows the errors of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more!

Visit https://www.securityweekly.com/asw for all the latest episodes!

Show Notes: https://securityweekly.com/asw-326

Paul's Security Weekly TV
More WAFs in Blocking Mode and More Security Headaches from LLMs - Sandy Carielli, Janet Worthington - ASW #326

Paul's Security Weekly TV

Apr 15, 2025 · 74:45


The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering enough value that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems.

Resources:
https://www.forrester.com/blogs/breaches-and-lawsuits-and-fines-oh-my-what-we-learned-the-hard-way-from-2024/
https://www.forrester.com/blogs/wafs-are-now-the-center-of-application-protection-suites/
https://www.forrester.com/blogs/are-you-making-these-devsecops-mistakes-the-four-phases-you-need-to-know-before-your-code-becomes-your-vulnerability/

In the news, a crates.io logging mistake shows the errors of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more!

Show Notes: https://securityweekly.com/asw-326

Application Security Weekly (Audio)
More WAFs in Blocking Mode and More Security Headaches from LLMs - Sandy Carielli, Janet Worthington - ASW #326

Application Security Weekly (Audio)

Apr 15, 2025 · 74:45


The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering enough value that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems.

Resources:
https://www.forrester.com/blogs/breaches-and-lawsuits-and-fines-oh-my-what-we-learned-the-hard-way-from-2024/
https://www.forrester.com/blogs/wafs-are-now-the-center-of-application-protection-suites/
https://www.forrester.com/blogs/are-you-making-these-devsecops-mistakes-the-four-phases-you-need-to-know-before-your-code-becomes-your-vulnerability/

In the news, a crates.io logging mistake shows the errors of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more!

Visit https://www.securityweekly.com/asw for all the latest episodes!

Show Notes: https://securityweekly.com/asw-326

Application Security Weekly (Video)
More WAFs in Blocking Mode and More Security Headaches from LLMs - Sandy Carielli, Janet Worthington - ASW #326

Application Security Weekly (Video)

Apr 15, 2025 · 74:45


The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering enough value that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems.

Resources:
https://www.forrester.com/blogs/breaches-and-lawsuits-and-fines-oh-my-what-we-learned-the-hard-way-from-2024/
https://www.forrester.com/blogs/wafs-are-now-the-center-of-application-protection-suites/
https://www.forrester.com/blogs/are-you-making-these-devsecops-mistakes-the-four-phases-you-need-to-know-before-your-code-becomes-your-vulnerability/

In the news, a crates.io logging mistake shows the errors of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more!

Show Notes: https://securityweekly.com/asw-326

PodRocket - A web development podcast from LogRocket
Put your database in the browser with Ben Holmes

PodRocket - A web development podcast from LogRocket

Apr 3, 2025 · 32:25


Ben Holmes, product engineer at Warp, joins PodRocket to talk about local-first web apps and what it takes to run a database directly in the browser. He breaks down how moving data closer to the user can reduce latency, improve performance, and simplify frontend development. Learn about SQLite in the browser, syncing challenges, handling conflicts, and tools like WebAssembly, IndexedDB, and CRDTs. Plus, Ben shares insights from building his own SimpleSyncEngine and where local-first development is headed!

Links
https://bholmes.dev
https://www.linkedin.com/in/bholmesdev
https://www.youtube.com/@bholmesdev
https://x.com/bholmesdev
https://bsky.app/profile/bholmes.dev
https://github.com/bholmesdev

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr).

Special Guest: Ben Holmes.

Liquid Weekly Podcast: Shopify Developers Talking Shopify Development
Episode 038 - Jan Frey on How to Become a Shopify Developer in 2025

Liquid Weekly Podcast: Shopify Developers Talking Shopify Development

Mar 27, 2025 · 62:22


In this episode of the Liquid Weekly Podcast, hosts Karl Meisterheim and Taylor Page welcome back Jan Frey, one of the greatest Shopify development teachers on the web with the incredibly helpful and popular YouTube channel Coding with Jan. The conversation covers a range of topics including the current landscape of Shopify development, strategies for finding clients, the importance of professionalism, and the evolving role of AI in web development. Jan shares insights on his new JavaScript training program aimed at helping developers enhance their skills, while also discussing the significance of soft skills in client interactions.

Takeaways
Finding clients often comes from referrals and established relationships.
Professionalism is key in differentiating yourself as a developer.
AI tools can enhance productivity but should not replace human developers.
Soft skills are just as important as technical skills in client interactions.
Building a solid portfolio and online presence is crucial for new developers.
JavaScript is essential for Shopify development and should be learned thoroughly.
Understanding the Shopify admin and Liquid is vital for effective development.
Networking and community engagement can lead to more opportunities.
Creating content can help establish authority and attract clients.
Continuous learning and adapting to new tools is necessary for success in development.

Introducing the .dev Assistant VSCode Extension - https://shopify.dev/changelog/introdu...
[action required] Checkout APIs will be shut down April 1, 2025 - https://shopify.dev/changelog/checkou...
[action required] AMAZON_PAY enumerated in DigitalWallets - https://shopify.dev/changelog/amazonp...
[action required] Metafield description input field removal - https://shopify.dev/changelog/metafie...
New customer address capabilities in the Admin API - https://shopify.dev/changelog/new-cus...

Timestamps
00:00 Exploring Shopify Development and Educational Initiatives
01:25 The Evolution of Development in 2025
04:23 Finding Clients and Building a Portfolio
07:21 Soft Skills in Development and Client Interaction
13:15 Navigating Cold Outreach Strategies
17:30 Building a Professional Online Presence
22:59 The Importance of Referrals and Networking
31:52 Establishing Technical Knowledge in Development
38:49 The Future of Development in an AI World
40:47 The Role of AI in Web Development
46:04 Essential Skills for Freelance Developers
47:03 Mastering JavaScript for Shopify
52:56 Shopify Updates and Changes
01:01:35 Personal Highlights and Future Collaborations

Find Jan Online
YouTube: https://www.youtube.com/@CodingwithJan
LinkedIn: / jan-frey
Twitter (X): https://x.com/Coding_with_Jan
Website: https://codingwithjan.com/
Freemote: https://www.freemote.com/
Javascript Training: https://www.codingwithjan.com/javascr...

Resources
Luck Sail: • The little risks you can take to incr...
Dev Changelog

Picks of the Week
Karl: Ruby on Rails and Web Assembly - https://web.dev/blog/ruby-on-rails-on...
Jan: Working with Shopify Academy - https://www.shopifyacademy.com/
Taylor: GoRuck Weighted Vest - https://www.goruck.com/products/train...

Signup for Liquid Weekly Newsletter
Don't miss out on expert insights and tips—subscribe to Liquid Weekly for more content like this delivered right to your inbox each week - https://liquidweekly.com/

Thinking Elixir Podcast
245: Supply Chain Security and SBoMs

Thinking Elixir Podcast

Mar 18, 2025 · 74:36


News includes a new library called phoenix_sync for real-time sync in Postgres-backed Phoenix applications, Peter Solnica released a Text Parser for extracting structured data from text, a useful tip on finding Hex package versions locally with mix hex.info, Wasmex updated to v0.10 with WebAssembly component support, and Chrome introduces a new browser feature similar to LiveView.JS. We also talked with Alistair Woodman and Jonatan Männchen from the EEF about Jonatan's role as CISO, the Security Working Group, and their work on OpenChain compliance for supply-chain security, Software Bill of Materials (SBoMs), and what these initiatives mean for the Elixir community, and more! Show Notes online - http://podcast.thinkingelixir.com/245 (http://podcast.thinkingelixir.com/245) Elixir Community News https://gigalixir.com/thinking (https://gigalixir.com/thinking?utm_source=thinkingelixir&utm_medium=shownotes) – Gigalixir is sponsoring the show, offering 20% off standard tier prices for a year with promo code "Thinking". https://github.com/electric-sql/phoenix_sync (https://github.com/electric-sql/phoenix_sync?utm_source=thinkingelixir&utm_medium=shownotes) – New library called phoenix_sync providing real-time sync for Postgres-backed Phoenix applications. https://hexdocs.pm/phoenix_sync/readme.html (https://hexdocs.pm/phoenix_sync/readme.html?utm_source=thinkingelixir&utm_medium=shownotes) – Documentation for phoenix_sync, a solution for building modern, real-time apps with local-first/sync in Elixir. https://github.com/josevalim/sync (https://github.com/josevalim/sync?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim's original proof of concept repo that was promptly archived. https://electric-sql.com/ (https://electric-sql.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Electric SQL's platform that syncs subsets of Postgres data into local apps and services, allowing data to be available offline and in-sync. https://solnic.dev/posts/announcing-textparser-for-elixir/ (https://solnic.dev/posts/announcing-textparser-for-elixir/?utm_source=thinkingelixir&utm_medium=shownotes) – Peter Solnica released TextParser, a library for extracting interesting parts of text like hashtags and links. https://hexdocs.pm/text_parser/readme.html (https://hexdocs.pm/text_parser/readme.html?utm_source=thinkingelixir&utm_medium=shownotes) – Documentation for the Text Parser library that helps parse text into structured data. https://www.elixirstreams.com/tips/mix-hex-info (https://www.elixirstreams.com/tips/mix-hex-info?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir stream tip on using mix hex.info to find the latest package version for a Hex package locally, without needing to search on hex.pm or GitHub. https://github.com/phoenixframework/tailwind/blob/main/README.md#updating-from-tailwind-v3-to-v4 (https://github.com/phoenixframework/tailwind/blob/main/README.md#updating-from-tailwind-v3-to-v4?utm_source=thinkingelixir&utm_medium=shownotes) – Guide for upgrading Tailwind to V4 in existing Phoenix applications using Tailwind's automatic upgrade helper. https://gleam.run/news/hello-echo-hello-git/ (https://gleam.run/news/hello-echo-hello-git/?utm_source=thinkingelixir&utm_medium=shownotes) – Gleam 1.9.0 release with searchability on hexdocs, Echo debug printing for improved debugging, and ability to depend on Git-hosted dependencies. 
https://d-gate.io/blog/everything-i-was-lied-to-about-node-came-true-with-elixir (https://d-gate.io/blog/everything-i-was-lied-to-about-node-came-true-with-elixir?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post discussing how promises made about NodeJS actually came true with Elixir. https://hexdocs.pm/wasmex/Wasmex.Components.html (https://hexdocs.pm/wasmex/Wasmex.Components.html?utm_source=thinkingelixir&utm_medium=shownotes) – Wasmex updated to v0.10 with support for WebAssembly components, enabling applications and components to work together regardless of original programming language. https://ashweekly.substack.com/p/ash-weekly-issue-8 (https://ashweekly.substack.com/p/ash-weekly-issue-8?utm_source=thinkingelixir&utm_medium=shownotes) – AshWeekly Issue 8 covering AshOps with mix task capabilities for CRUD operations and BeaconCMS being included in the Ash HQ installer script. https://developer.chrome.com/blog/command-and-commandfor (https://developer.chrome.com/blog/command-and-commandfor?utm_source=thinkingelixir&utm_medium=shownotes) – Chrome update brings new browser feature with commandfor and command attributes, similar to Phoenix LiveView.JS but native to browsers. https://codebeamstockholm.com/ (https://codebeamstockholm.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Code BEAM Lite announced for Stockholm on June 2, 2025 with keynote speaker Björn Gustavsson, the "B" in BEAM. https://alchemyconf.com/ (https://alchemyconf.com/?utm_source=thinkingelixir&utm_medium=shownotes) – AlchemyConf coming up March 31-April 3 in Braga, Portugal. Use discount code THINKINGELIXIR for 10% off. https://www.gigcityelixir.com/ (https://www.gigcityelixir.com/?utm_source=thinkingelixir&utm_medium=shownotes) – GigCity Elixir and NervesConf on May 8-10, 2025 in Chattanooga, TN, USA. https://www.elixirconf.eu/ (https://www.elixirconf.eu/?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf EU on May 15-16, 2025 in Kraków & Virtual. https://goatmire.com/#tickets (https://goatmire.com/#tickets?utm_source=thinkingelixir&utm_medium=shownotes) – Goatmire tickets are on sale now for the conference on September 10-12, 2025 in Varberg, Sweden. Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources https://elixir-lang.org/blog/2025/02/26/elixir-openchain-certification/ (https://elixir-lang.org/blog/2025/02/26/elixir-openchain-certification/?utm_source=thinkingelixir&utm_medium=shownotes) https://cna.erlef.org/ (https://cna.erlef.org/?utm_source=thinkingelixir&utm_medium=shownotes) – EEF CVE Numbering Authority https://erlangforums.com/t/security-working-group-minutes/3451/22 (https://erlangforums.com/t/security-working-group-minutes/3451/22?utm_source=thinkingelixir&utm_medium=shownotes) https://podcast.thinkingelixir.com/220 (https://podcast.thinkingelixir.com/220?utm_source=thinkingelixir&utm_medium=shownotes) – previous interview with Alistair https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act (https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act?utm_source=thinkingelixir&utm_medium=shownotes) – CRA - Cyber Resilience Act https://www.cisa.gov/ (https://www.cisa.gov/?utm_source=thinkingelixir&utm_medium=shownotes) – CISA US Government Agency https://www.cisa.gov/sbom (https://www.cisa.gov/sbom?utm_source=thinkingelixir&utm_medium=shownotes) – Software Bill of Materials https://oss-review-toolkit.org/ort/ (https://oss-review-toolkit.org/ort/?utm_source=thinkingelixir&utm_medium=shownotes) – Desire to integrate with tooling outside the Elixir ecosystem like OSS Review Toolkit https://github.com/voltone/rebar3_sbom (https://github.com/voltone/rebar3_sbom?utm_source=thinkingelixir&utm_medium=shownotes) https://cve.mitre.org/ (https://cve.mitre.org/?utm_source=thinkingelixir&utm_medium=shownotes) https://openssf.org/projects/guac/ (https://openssf.org/projects/guac/?utm_source=thinkingelixir&utm_medium=shownotes) https://erlef.github.io/security-wg/securityvulnerabilitydisclosure/ (https://erlef.github.io/security-wg/security_vulnerability_disclosure/?utm_source=thinkingelixir&utm_medium=shownotes) – EEF Security WG Vulnerability Disclosure Guide Guest Information - https://x.com/maennchen_ (https://x.com/maennchen_?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan on Twitter/X - https://bsky.app/profile/maennchen.dev (https://bsky.app/profile/maennchen.dev?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan on Bluesky - https://github.com/maennchen/ (https://github.com/maennchen/?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan on Github - https://maennchen.dev (https://maennchen.dev?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan's Blog - https://www.linkedin.com/in/alistair-woodman-51934433 (https://www.linkedin.com/in/alistair-woodman-51934433?utm_source=thinkingelixir&utm_medium=shownotes) – Alistair Woodman on LinkedIn - awoodman@erlef.org - https://github.com/ahw59/ (https://github.com/ahw59/?utm_source=thinkingelixir&utm_medium=shownotes) – Alistair on Github - http://erlef.org/ (http://erlef.org/?utm_source=thinkingelixir&utm_medium=shownotes) – Erlang Ecosystem Foundation Website Find us online - Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com) - Message the show - X (https://x.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen on X - @brainlid (https://x.com/brainlid) - Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social) 
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

The Secure Developer
Rethinking Secure Communication With Mrinal Wadhwa

The Secure Developer

Mar 18, 2025 · 40:32


Episode Summary
In this episode of The Secure Developer, Danny Allan sits down with Mrinal Wadhwa, CTO at Ockam, to explore the evolving landscape of secure communication in distributed systems. They discuss the challenges of securing microservices, IoT networks, and Kubernetes environments and how traditional TLS-based security models may no longer be sufficient. Mrinal shares insights into Ockam's approach to end-to-end encrypted, mutually authenticated channels and the impact of WebAssembly, passkeys, and modern cryptographic identity management on security. Tune in for a deep dive into how organizations can rethink security at runtime to minimize risks in today's complex digital ecosystems.

Show Notes
Security in modern applications is more challenging than ever, with microservices architectures, IoT deployments, and distributed computing environments introducing new risks. In this episode, Danny Allan welcomes Mrinal Wadhwa, CTO at Ockam, to discuss how secure communication models need to evolve beyond traditional TLS and perimeter-based defenses.

Topics covered include:
The challenges of securing microservices and Kubernetes clusters
How end-to-end encryption and mutual authentication can minimize risk
The importance of cryptographic identities and key rotation at scale
How Ockam enables secure channels across multiple transport layers (TCP, Bluetooth, Kafka, etc.)
The role of WebAssembly and passkeys in rethinking security models
Shifting from perimeter-based security to secure-by-design communication

Mrinal shares key insights on how organizations can rethink risk at runtime, considering the number of people and systems involved in data flow rather than just static build-time dependencies. Whether you're a security leader, developer, or architect, this episode provides actionable insights on building trust in your infrastructure without compromising performance or agility.

Links
Ockam
Passkeys Overview
Private Compute Cloud by Apple
Snyk - The Developer Security Company

Follow Us
Our Website
Our LinkedIn

De Nederlandse Kubernetes Podcast
#86 Why Observability Is Crucial for SRE Teams

De Nederlandse Kubernetes Podcast

Mar 18, 2025 · 44:28


In this special episode of the Nederlandse Kubernetes Podcast, recorded at SRE Day 2024, Ronald Kers (CNCF Ambassador) and Jan Stomphorst (Solutions Architect at ACC ICT), together with Erik Schabell (Director of Technical Marketing & Evangelism at Chronosphere), dive into the world of observability, cloud-native monitoring, and cost savings.

Observability is an essential part of modern IT infrastructures, but many organizations struggle with the explosive growth of data and the costs that come with it. Erik shares experiences from his long career in open source and cloud technologies, and how he went from Red Hat to Chronosphere to help companies optimize their observability stack.

We discuss:
How Chronosphere helps organizations keep costs and the data explosion under control.
Why much of the data that is collected is never used, yet still incurs high storage and processing costs.
The shift from traditional monitoring tools to cloud-native observability solutions.
How SRE teams can avoid being flooded with redundant alerts and metrics.
The psychological impact of on-call shifts and how tooling can help SREs reduce stress.

We also cover AI, vendor lock-in, and the future of Kubernetes. Containers and Kubernetes have transformed the IT world, but what will be the next big innovation? AI? WebAssembly? Serverless? Erik shares his vision of where the industry is heading and how observability remains the key to more efficient and reliable IT systems.

Send us a message.

ACC ICT - specialist in IT continuity: business-critical applications and data securely available, independent of third parties, anytime and anywhere.

Like and subscribe! It helps out a lot.

You can also find us on:
De Nederlandse Kubernetes Podcast - YouTube
Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok
De Nederlandse Kubernetes Podcast

Where can you meet us: Events

This Podcast is powered by: ACC ICT - IT-Continuïteit voor Bedrijfskritische Applicaties | ACC ICT

52 Weeks of Cloud
Rust Projects with Multiple Entry Points Like CLI and Web

52 Weeks of Cloud

Mar 16, 2025 · 5:32


Rust Multiple Entry Points: Architectural Patterns

Key Points
Core Concept: Multiple entry points in Rust enable single codebase deployment across CLI, microservices, WebAssembly and GUI contexts
Implementation Path: Initial CLI development → Web API → Lambda/cloud functions
Cargo Integration: Native support via src/bin directory or explicit binary targets in Cargo.toml

Technical Advantages
Memory Safety: Consistent safety guarantees across deployment targets
Type Consistency: Strong typing ensures API contract integrity between interfaces
Async Model: Unified asynchronous execution model across environments
Binary Optimization: Compile-time optimizations yield superior performance vs runtime interpretation
Ownership Model: No-saved-state philosophy aligns with Lambda execution context

Deployment Architecture
Core Logic Isolation: Business logic encapsulated in library crates
Interface Separation: Entry point-specific code segregated from core functionality
Build Pipeline: Single compilation source enables consistent artifact generation
Infrastructure Consistency: Uniform deployment targets eliminate environment-specific bugs
Resource Optimization: Shared components reduce binary size and memory footprint

Implementation Benefits
Iteration Speed: CLI provides immediate feedback loop during core development
Security Posture: Memory safety extends across all deployment targets
API Consistency: JSON payload structures remain identical between CLI and web interfaces
Event Architecture: Natural alignment with event-driven cloud function patterns
Compile-Time Optimizations: CPU-specific enhancements available at binary generation
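To make the layout these notes describe concrete (core logic isolated in a library crate, thin entry points under src/bin), here is a minimal Rust sketch. The crate, module, and function names are hypothetical and not taken from the episode.

```rust
// Hypothetical crate layout for multiple entry points sharing one core library:
//
//   src/lib.rs      -- business logic shared by every entry point
//   src/bin/cli.rs  -- command-line entry point (sketched below)
//   src/bin/web.rs  -- a web/Lambda entry point would call the same library functions
//
// Cargo picks up each file under src/bin as its own binary target automatically;
// explicit [[bin]] sections in Cargo.toml are only needed for custom names or paths.

// ---- src/lib.rs (core logic) ----
pub mod app_core {
    /// Shared business logic: every interface (CLI, web API, cloud function) calls this.
    pub fn greet(name: &str) -> String {
        format!("Hello, {name}!")
    }
}

// ---- src/bin/cli.rs (thin CLI wrapper) ----
// In a real crate this file would `use my_crate::app_core;` instead of the inline module above.
fn main() {
    let name = std::env::args().nth(1).unwrap_or_else(|| "world".to_string());
    println!("{}", app_core::greet(&name));
}
```

Keeping the binaries this thin is what lets the same core logic ship as a CLI, a web service, or a Wasm module without the interfaces drifting apart.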

52 Weeks of Cloud
Assembly Language & WebAssembly: Technical Analysis

52 Weeks of Cloud

Mar 7, 2025 · 5:52


Assembly Language & WebAssembly: Evolutionary Paradigms
Episode Notes

I. Assembly Language: Foundational Framework
Ontological Definition
Low-level symbolic representation of machine code instructions
Minimalist abstraction layer above binary machine code (1s/0s)
Human-readable mnemonics with 1:1 processor operation correspondence

Core Architectural Characteristics
ISA-Specificity: Direct processor instruction set architecture mapping
Memory Model: Direct register/memory location/IO port addressing
Execution Paradigm: Sequential instruction execution with explicit flow control
Abstraction Level: Minimal hardware abstraction; operations reflect CPU execution steps

Structural Components
Mnemonics: Symbolic machine instruction representations (MOV, ADD, JMP)
Operands: Registers, memory addresses, immediate values
Directives: Non-compiled assembler instructions (.data, .text)
Labels: Symbolic memory location references

II. WebAssembly: Theoretical Framework
Conceptual Architecture
Binary instruction format for portable compilation targeting
High-level language compilation target enabling near-native web platform performance

Architectural Divergence from Traditional Assembly
Abstraction Layer: Virtual ISA designed for multi-target architecture translation
Execution Model: Stack-based VM within memory-safe sandbox
Memory Paradigm: Linear memory model with explicit bounds checking
Type System: Static typing with validation guarantees

Implementation Taxonomy
Binary Format: Compact encoding optimized for parsing efficiency
Text Format (WAT): S-expression syntax for human-readable representation
Module System: Self-contained execution units with explicit import/export interfaces
Compilation Pipeline: High-level languages → LLVM IR → WebAssembly binary

III. Comparative Analysis
Conceptual Continuity
WebAssembly extends assembly principles via virtualization and standardization
Preserves performance characteristics while introducing portability and security guarantees

Technical Divergences
Execution Environment: Hardware CPU vs. Virtual Machine
Memory Safety: Unconstrained memory access vs. Sandboxed linear memory
Portability Paradigm: Architecture-specific vs. Architecture-neutral

IV. Evolutionary Significance
WebAssembly represents convergent evolution of assembly principles adapted to distributed computing
Maintains low-level performance characteristics while enabling cross-platform execution
Exemplifies incremental technological innovation building upon historical foundations
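As a rough illustration of the compilation pipeline these notes mention (high-level language → LLVM IR → WebAssembly binary), here is a minimal Rust sketch using the standard wasm32 target. The function name, the build configuration in the comments, and the WAT excerpt are illustrative assumptions, not material from the episode.

```rust
// Minimal sketch: compile a Rust function to a WebAssembly module.
//
// Cargo.toml (assumed) would declare the crate as a cdylib so the output is a .wasm file:
//   [lib]
//   crate-type = ["cdylib"]
//
// Build with the built-in target:
//   cargo build --target wasm32-unknown-unknown --release

/// `#[no_mangle]` keeps the symbol name stable, so the compiled module lists `add`
/// in its export section and any host (browser or standalone runtime) can call it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// In the human-readable text format (WAT), the compiled body reduces to the
// stack-machine sequence the notes describe:
//
//   (func $add (param i32 i32) (result i32)
//     local.get 0
//     local.get 1
//     i32.add)
```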

Avkodat - En podd för utvecklare
42 - Nobody is pulling the plug on WebAssembly - Jimmy Engström on Blazor and why writing books is still so much fun

Avkodat - En podd för utvecklare

Mar 3, 2025 · 42:50


Avkodat reaches 42 episodes! Who better to invite than Jimmy Engström: Blazor guru, Microsoft MVP, and developer lead at CAB Group. Jimmy talks Blazor, of course, but also the pains and big upsides of writing books and creating courses, together with Johan Nordberg and Jakob Ehn.

Links:
WebAssembly: https://webassembly.org/
Jimmy's book - Web Development with Blazor: https://www.adlibris.com/se/bok/web-development-with-blazor-9781835465912

The Unhandled Exception Podcast
The Uno Platform and Hot Design - with Jérôme Laban

The Unhandled Exception Podcast

Feb 28, 2025 · 58:24


In this episode, I was joined by Jérôme Laban, the CTO of the Uno Platform. We chatted about the Uno Platform itself, which is a cross-platform framework for building single-codebase applications that run on Windows, iOS, Android, macOS, Linux, and the web via WebAssembly. We also discussed the new Hot Design feature, a designer/builder that, during development, becomes part of your application at runtime, so you can live build/edit your UIs with an easy drag-and-drop interface whilst your application is running! Because you're building your UI at runtime, the designer UI also has access to your real data and its properties for databinding, and renders them as you edit!

Jérôme Laban is the Chief Technology Officer at Uno Platform. With over two decades of experience in software development, Jérôme is a recognized expert in cross-platform development and .NET technologies, and has been awarded the Microsoft MVP award for many years. At Uno Platform, Jérôme leads the technical vision and development of the framework, empowering developers worldwide to create rich, performant applications across all platforms. A passionate advocate for open-source and community-driven development, Jérôme frequently shares his insights at global conferences, webinars, and on Twitch.

For a full list of show notes, or to add comments, please see the website here.

The Cloud Pod
293: Terraform Apply – Output Pizza

The Cloud Pod

Feb 26, 2025 · 69:53


Welcome to episode 293 of The Cloud Pod – where the forecast is always cloudy! This week we've got a lot of news and, surprise, a new installment of Cloud Journey AND an aftershow – so make sure to stay tuned for that! We've got undersea cables, Go 1.24, Wasm, Anthropic and more.

Titles we almost went with this week:
Lets Go!
Under Sea cables make AI go BRRRRRR
The CloudPod says it will grow the listeners by 10x by 2027

A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our slack channel for more info.

General News

01:30 Go 1.24 is released!
Go 1.24 has been released with a bunch of improvements! Go now fully supports generic type aliases. It also includes several performance improvements to the runtime that have reduced CPU overhead by 2-3% on average across a suite of representative benchmarks. (Say that 5 times fast.) There are tool improvements around tool dependencies for a module. The standard library now includes new mechanisms to facilitate FIPS-140-3 compliance. And you know we love some good FIPS-140-3 compliance. Lastly, it includes some improved WebAssembly support – which we'll talk about later.

04:46 Unlocking global AI potential with next-generation subsea infrastructure
Meta announced their most ambitious subsea cable endeavor: Project Waterworth. Once the cable is completed, the project will reach five major continents and span over 50,000 KM (longer than the earth's circumference), making it the world's longest subsea cable project using the highest-capacity technology available. It will bring connectivity to the US, India, Brazil, and South Africa, as well as other key regions. Waterworth will be a multi-billion dollar, multi-year investment to strengthen the scale and reliability of the world's digital highways by opening three new oceanic corridors with the abundant, high-speed connectivity needed to drive AI innovation around the world. Meta has apparently developed 20 subsea cables over the last decade, including multiple deployments of industry-leading subsea cables of 24 fiber pairs, compared to the typical 8 to 16 pairs of other new systems. They are also deploying a first-of-its-kind routing system, maximizing the cable load in deep waters at depths up to 7,000 meters and using enhanced burial techniques in high-risk fault areas, such as shallow waters near the coast, to avoid damage from ship anchors and other hazards. They wrap up the article by basically saying t

52 Weeks of Cloud
What is Web Assembly?

52 Weeks of Cloud

Feb 24, 2025 · 7:39


WebAssembly Core Concepts - Episode Notes

Introduction [00:00-00:14]
Overview of episode focus: WebAssembly core concepts
Structure: definition, purpose, implementation pathways

Fundamental Definition [00:14-00:38]
Low-level binary instruction format for stack-based virtual machine
Designed as compilation target for high-level languages
Enables client/server application deployment
Near-native performance execution capabilities
Speed as primary advantage

Technical Architecture [00:38-01:01]
Binary format with deterministic execution model
Structured control flow with validation constraints
Linear memory model with protected execution
Static type system for function safety

Runtime Characteristics [01:01-01:33]
Execution in structured stack machine environment
Processes structured control flow (blocks, loops, branches)
Memory-safe sandboxed execution environment
Static validation for consistent behavior guarantees

Compilation Pipeline [01:33-02:01]
Accepts diverse high-level language inputs (C++, Rust)
Implements efficient compilation strategies
Generates optimized binary format output
Maintains debugging information through source maps

Architectural Components [02:01-02:50]
Virtual Machine Integration:
  Operates alongside JavaScript in browser
  Enables distinct code execution pathways
  Maintains interoperability between runtimes
Binary Format Implementation:
  Compact format designed for low latency
  Near-native execution performance
  Instruction sequences optimized for modern processors
Memory Model:
  Linear memory through ArrayBuffer
  Low-level memory access
  Maintains browser sandbox security

Core Technical Components [02:50-03:53]
Module System:
  Fundamental compilation unit
  Stateless design for cross-context sharing
  Explicit import/export interfaces
  Deterministic initialization semantics
Memory Management:
  Resizable ArrayBuffer for linear memory operations
  Bounds-checked memory access
  Direct binary data manipulation
  Memory isolation between instances
Table Architecture:
  Stores reference types not representable as raw bytes
  Implements dynamic dispatch
  Supports function reference management
  Enables indirect call operations

Integration Pathways [03:53-04:47]
C/C++ Development:
  Emscripten toolchain
  LLVM backend optimizations
  JavaScript interface code generation
  DOM access through JavaScript bindings
Rust Development:
  Native WebAssembly target support
  wasm-bindgen for JavaScript interop
  Direct wasm-pack integration
  Zero-cost abstractions
AssemblyScript:
  TypeScript-like development experience
  Strict typing requirements
  Direct WebAssembly compilation
  Familiar tooling compatibility

Performance Characteristics [04:47-05:30]
Execution Efficiency:
  Near-native execution speeds
  Optimized instruction sequences
  Reduced parsing and compilation overhead
  Consistent performance profiles
Memory Efficiency:
  Direct memory manipulation
  Reduced garbage collection overhead
  Optimized binary data operations
  Predictable memory patterns

Security Implementation [05:30-05:53]
Sandboxed execution
Browser security policy enforcement
Memory isolation
Same-origin restrictions
Controlled external access

Web Platform Integration [05:53-06:20]
JavaScript Interoperability:
  Bidirectional function calls
  Primitive data type exchange
  Structured data marshaling
  Synchronous operation capability
DOM Integration:
  DOM access through JavaScript bridges
  Event handling mechanisms
  Web API support
  Browser compatibility

Development Toolchain [06:20-06:52]
Compilation Targets:
  Multiple source language support
  Optimization pipelines
  Debugging capabilities
  Tooling integrations
Development Workflow:
  Modular development patterns
  Testing frameworks
  Performance profiling tools
  Deployment optimizations

Future Development [06:52-07:10]
Direct DOM access capabilities
Enhanced garbage collection
Improved debugging features
Expanded language support
Platform evolution

Resources [07:10-07:40]
Mozilla Developer Network (developer.mozilla.org)
WebAssembly concepts documentation
Web API implementation details
Mozilla's official curriculum

Production Notes
Total Duration: ~7:40
Key visualization opportunities:
  Stack-based VM architecture diagram
  Memory model illustration
  Language compilation pathways
  Performance comparison graphs
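To ground the Rust integration pathway listed in these notes (native Wasm target, wasm-bindgen for JavaScript interop, wasm-pack builds), here is a small sketch. The exported function, the package name, and the build commands in the comments are illustrative assumptions rather than details from the episode.

```rust
// Small sketch of Rust -> browser interop via wasm-bindgen.
//
// Cargo.toml (assumed):
//   [lib]
//   crate-type = ["cdylib"]
//   [dependencies]
//   wasm-bindgen = "0.2"

use wasm_bindgen::prelude::*;

/// Exposed to JavaScript; wasm-bindgen generates the glue code that copies the
/// string in and out of the module's linear memory.
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    format!("Hello from WebAssembly, {name}!")
}

// Build with `wasm-pack build --target web`, then from JavaScript (hypothetical
// generated package name):
//
//   import init, { greet } from "./pkg/my_module.js";
//   await init();
//   console.log(greet("browser"));
```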

Modern Web
AI is Making Serverless and Cloud Development TOO EASY

Modern Web

Feb 19, 2025 · 35:03


In this episode of the Modern Web Podcast, Danny Thompson and Adam Rackis talk with Abdel Sghiouar, Cloud Developer Advocate at Google, Kubernetes Podcast co-host, and CNCF Ambassador. Abdel shares insights from his global tech journey, from Morocco to Google's largest data center in Belgium, and now Sweden. They discuss cloud computing trends, including WebAssembly, AI-driven serverless workloads, and the shifting lines between frontend and backend. They also explore AI's impact on cloud development, from simplifying tooling to raising questions about job automation. Abdel offers a pragmatic take on AI's role, emphasizing that those who learn to leverage it will thrive.

Key points from this episode:
- Cultural Differences in Tech – Abdel's global experience shaped his view on work culture, from Morocco's relationship-driven workplaces to Europe's structured work-life balance.
- Making Cloud Simpler – He focuses on breaking down cloud concepts and making them more approachable for developers, from high-level serverless tools to hands-on infrastructure.
- AI in Cloud & Serverless – AI is improving cloud navigation, troubleshooting, and serverless efficiency, with tools like Google Cloud Assist and Vercel's Fluid Compute.
- AI & Tech Jobs – AI won't replace developers but will automate simpler tasks. Understanding fundamentals and problem-solving remain key to staying relevant.

0:00 - The challenge of opinionated platforms and integration in cloud
0:46 - Welcome to the Modern Web Podcast with Danny Thompson & Adam Rackis
1:15 - Guest introduction: Abdel Sghiouar, Cloud Developer Advocate at Google
2:01 - Abdel's international journey and how different work cultures shape tech perspectives
7:08 - Bridging the cloud knowledge gap for web developers
9:38 - Cloud fundamentals: compute, storage, and networking
12:19 - Emerging trends: WebAssembly, AI, and serverless evolution
16:07 - AI's impact on cloud development: Hype vs. reality
22:27 - The future of serverless and infrastructure automation
28:22 - Google Cloud vs. Firebase: Balancing simplicity and scalability
31:50 - What Abdel is geeking out about: Content creation and AI tools
34:51 - Closing thoughts & where to connect

Paul's Security Weekly
Top 10 Web Hacking Techniques of 2024 - James Kettle - ASW #318

Paul's Security Weekly

Feb 18, 2025 · 44:57


We're getting close to two full decades of celebrating web hacking techniques. James Kettle shares which was his favorite, why the list is important to the web hacking community, and what inspires the kind of research that makes it onto the list. We discuss why we keep seeing eternal flaws like XSS and SQL injection making these lists year after year and how clever research is still finding new attack surfaces in old technologies. But there's a lot of new web technology still to be examined, from HTTP/2 and HTTP/3 to WebAssembly. Segment Resources: Top 10, 2024: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024 Full nomination list: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open Project overview: https://portswigger.net/research/top-10-web-hacking-techniques Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-318

Paul's Security Weekly TV
Top 10 Web Hacking Techniques of 2024 - James Kettle - ASW #318

Paul's Security Weekly TV

Feb 18, 2025 · 44:57


We're getting close to two full decades of celebrating web hacking techniques. James Kettle shares which was his favorite, why the list is important to the web hacking community, and what inspires the kind of research that makes it onto the list. We discuss why we keep seeing eternal flaws like XSS and SQL injection making these lists year after year and how clever research is still finding new attack surfaces in old technologies. But there's a lot of new web technology still to be examined, from HTTP/2 and HTTP/3 to WebAssembly. Segment Resources: Top 10, 2024: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024 Full nomination list: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open Project overview: https://portswigger.net/research/top-10-web-hacking-techniques Show Notes: https://securityweekly.com/asw-318

Application Security Weekly (Audio)
Top 10 Web Hacking Techniques of 2024 - James Kettle - ASW #318

Application Security Weekly (Audio)

Feb 18, 2025 · 44:57


We're getting close to two full decades of celebrating web hacking techniques. James Kettle shares which was his favorite, why the list is important to the web hacking community, and what inspires the kind of research that makes it onto the list. We discuss why we keep seeing eternal flaws like XSS and SQL injection making these lists year after year and how clever research is still finding new attack surfaces in old technologies. But there's a lot of new web technology still to be examined, from HTTP/2 and HTTP/3 to WebAssembly. Segment Resources: Top 10, 2024: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024 Full nomination list: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open Project overview: https://portswigger.net/research/top-10-web-hacking-techniques Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-318

Application Security Weekly (Video)
Top 10 Web Hacking Techniques of 2024 - James Kettle - ASW #318

Application Security Weekly (Video)

Feb 18, 2025 · 44:57


We're getting close to two full decades of celebrating web hacking techniques. James Kettle shares which was his favorite, why the list is important to the web hacking community, and what inspires the kind of research that makes it onto the list. We discuss why we keep seeing eternal flaws like XSS and SQL injection making these lists year after year and how clever research is still finding new attack surfaces in old technologies. But there's a lot of new web technology still to be examined, from HTTP/2 and HTTP/3 to WebAssembly. Segment Resources: Top 10, 2024: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024 Full nomination list: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open Project overview: https://portswigger.net/research/top-10-web-hacking-techniques Show Notes: https://securityweekly.com/asw-318

Cup o' Go

Cup o' Go

Feb 14, 2025 · 70:32 · Transcript available


This episode was LIVE! Even if you usually listen to this show, if you want you can check out the video on YouTube :)

Visit https://cupogo.dev/ for store links, past episodes including transcripts, and more!

GopherCon Israel
Accepted proposal: Clone a Hash
We Replaced Our React Frontend with Go and WebAssembly from Dagger
Extensible Wasm Applications with Go by Cherry Mui
SQL NULLs are Weird! by Raymond Tukpe

Lightning round:
Go programs freeze when they are launched via a Steam client
Lovable's rewrite From Python to Go
Bunster: Compile shell scripts to static binaries
NVM for Windows
chi drops support for Go 1.14-1.19
Go 1.24.0 released

★ Support this podcast on Patreon ★

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPi packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.

Logfire: bringing OTEL to AI
OpenTelemetry recently merged Semantic Conventions for LLM workloads which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend. We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part.

Agents are the killer app for graphs
Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”

Why Graphs?
* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or “waiting days” between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode
Like and subscribe!

Chapters
* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes
* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.

Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it. Right from you, what is Pydantic and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123 and a bunch of other sensible conversions. And as you can imagine, the semantics around it. Exactly when you convert and when you don't, it's complicated, but because of that, it's more than just validation. Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.

Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structure output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structure output console in open source that people were talking about or was it just a random?

Samuel [00:02:26]: No, very much not. So I originally. Didn't implement JSON schema inside Pydantic and then Sebastian, Sebastian Ramirez, FastAPI came along and like the first I ever heard of him was over a weekend. I got like 50 emails from him or 50 like emails as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it will it kind of can be one source of source of truth for structured outputs and tools.

Swyx [00:03:09]: Before we dive in further on the on the AI side of things, something I'm mildly curious about, obviously, there's Zod in JavaScript land. Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?

Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. So there are some things that will come along, but for the most part, it should just get faster and cleaner.

Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLAMP, and when did you figure out, okay, this is something we should take seriously and focus more resources on it?

Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never go away. You can't get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust just to release open-source-free Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal model... The metric of performance is time to first token. That went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it didn't, it would have never have made financial sense in most companies. In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.

Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,

Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah.
I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while. 
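Before the conversation turns to graphs, it may help to see roughly what the "agent as a container" Samuel just described looks like in code. The sketch below follows the basic shape of Pydantic AI's documented API at the time of the episode (model name string, system prompt, tools, structured result type); parameter and attribute names may have shifted between versions, so treat it as an assumption-laden illustration rather than the definitive API.

```python
from pydantic import BaseModel
from pydantic_ai import Agent, RunContext

class Support(BaseModel):
    advice: str
    escalate: bool

# An agent bundles a model, a system prompt, optional tools, and a structured result type.
agent = Agent(
    "openai:gpt-4o",
    system_prompt="You are a support triage assistant.",
    result_type=Support,  # the LLM's answer is validated into this Pydantic model
)

@agent.tool
async def open_tickets(ctx: RunContext[None], customer_id: int) -> int:
    """Number of open tickets for this customer (stubbed here for illustration)."""
    return 2

result = agent.run_sync("Customer 123 says the dashboard is down.")
print(result.data)  # a validated Support instance
```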
I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. But then we have the problem that by default, they're not type safe because if you have a like add edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a, there's a graph library called, but it allows, it does like a basic runtime type checking. Ironically using Pydantic to try and make up for the fact that like fundamentally that graphs are not typed type safe. Well, I like Pydantic, but it did, that's not a real solution to have to go and run the code to see if it's safe. There's a reason that starting type checking is so powerful. And so we kind of, from a lot of iteration eventually came up with a system of using normally data classes to define nodes where you return the next node you want to call and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is. Yeah. Inherently type safe. And once we got that right, I, I wasn't, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have interact with gen AI, right? It's going to be like web. There's no longer be like a web department in a company is that there's just like all the developers are building for web building with databases. The same is going to be true for gen AI.Alessio [00:12:33]: Yeah. I see on your docs, you call an agent, a container that contains a system prompt function. Tools, structure, result, dependency type model, and then model settings. Are the graphs in your mind, different agents? Are they different prompts for the same agent? What are like the structures in your mind?Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning. That means our agent implementation without changing its API at all is now actually a graph under the hood as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.Swyx [00:14:00]: Right. Yeah. You do have very neat implementation of sort of inferring the graph from type hints, I guess. Yeah. Is what I would call it. Yeah. 
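The mechanism Samuel describes, nodes as plain dataclasses with the graph's edges recovered by introspecting the return type hints of each node, can be sketched in a few lines of ordinary Python. This is a toy illustration of the idea, not Pydantic AI's actual graph API; the node names and the `edges` helper are invented for the example.

```python
from __future__ import annotations

import typing
from dataclasses import dataclass

@dataclass
class End:
    value: str

@dataclass
class Greet:
    name: str

    def run(self) -> Farewell:  # edge: Greet -> Farewell
        return Farewell(f"Hello, {self.name}!")

@dataclass
class Farewell:
    message: str

    def run(self) -> End:  # edge: Farewell -> End
        return End(self.message + " Goodbye.")

def edges(*nodes: type) -> dict[str, list[str]]:
    """Recover the graph purely from return type hints, without running anything."""
    graph: dict[str, list[str]] = {}
    for node in nodes:
        ret = typing.get_type_hints(node.run)["return"]
        targets = typing.get_args(ret) or (ret,)  # a union like `A | End` means several edges
        graph[node.__name__] = [t.__name__ for t in targets]
    return graph

print(edges(Greet, Farewell))  # {'Greet': ['Farewell'], 'Farewell': ['End']}

# Running it is just "call a node, get a node back" until an End is returned,
# which is also why resuming from an arbitrary node later is straightforward.
node = Greet("Samuel")
while not isinstance(node, End):
    node = node.run()
print(node.value)  # Hello, Samuel! Goodbye.
```

Because the edges come from static type hints, a type checker (or a mermaid diagram generator) can see the whole workflow without executing any of it, which is the type-safety point being made here.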
I think the question always is I have gone back and forth. I used to work at Temporal where we would actually spend a lot of time complaining about graph based workflow solutions like AWS step functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean. And that's what we do with the quote unquote magic that the graph construction does.Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other weird, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of like the workflow, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function. And some of those lines are wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo. Project or your like proof of concept, that's fine because you can just say, and now we call this function. But when you're building when you're in real in real life, that doesn't work. And now how do we manage that concept to basically be able to start somewhere else in the in our code? Well, this graph implementation makes it incredibly easy because you just pass the node that is the start point for carrying on the graph and it continues to run. So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.Swyx [00:16:07]: You say imagine, but like right now, this pedantic AI actually resume, you know, six days later, like you said, or is this just like a theoretical thing we can go someday?Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes that we're going to add soon. But like the rest of it is basically there.Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, and now you're going into sort of the more like orchestrated things like Airflow, Prefect, Daxter, those guys.Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog. We're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that right yet, at least. 
We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual calls, as I say, is literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. We have a whole. We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think I think just generally also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. I think the one thing that I would say like yours, you know, as a library, you don't have that much control of it over the infrastructure. I do like the idea that each new agents or whatever or unit of work, whatever you call that should spin up in this sort of isolated boundaries. Whereas yours, I think around everything runs in the same process. But you ideally want to sort of spin out its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now. Right. As in theory, you're just like as long as you can serialize the calls to the next node, you just have to all of the different containers basically have to have the same the same code. I mean, I'm super excited about Cloudflare workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now because I'm super excited about that as a like compute level for some of this stuff where exactly what you're saying, basically. You can run everything as an individual. Like worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get in front of the line. Especially.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will. I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported at full? I actually wasn't fully aware of what the status of that thing is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in scripting, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular, Pydantic. Because these workers where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have a share. Shared memory for all the different Pydantic installations, effectively. That's the thing they work out. 
They're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles the WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare workers is.Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. It's just basic. And you're doing exactly that, right? You're using Rust to compile the WebAssembly and then you're calling that shared library from Python. And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI swarms would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI swarms. Well.Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what swarms would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically saying, how can we give people the same feeling that they were getting from swarms that led us to go and implement graphs? Because my, like, just call the next agent with Python code was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's not like, let us to get to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not. yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But like, it's been it's only been a couple of weeks. And of course, there's a point is that. Because they're relatively unopinionated about what you can go and do with them. They don't suit them. Like, you can go and do lots of lots of things with them, but they don't have the structure to go and have like specific names as much as perhaps like some other systems do. 
I think what our agents are, which have a name and I can't remember what it is, but this basically system of like, decide what tool to call, go back to the center, decide what tool to call, go back to the center and then exit. One form of graph, which, as I say, like our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these like predefined graph names or graph structures or whether it's just like, yep, I built a graph or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh, yeah, everything's a graph. And then they probably over rotate and go go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet. But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compounding AI systems. I don't know if you know of or care. This is the Gartner world of things where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.Swyx [00:24:29]: This is really the argument is that instead of putting everything in one model, you have more control and more maybe observability to if you break everything out into composing little models and changing them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they. Even if you have the observability through log five that you can see what was going on, if you don't have a nice hook point to say, hang on, this is all gone wrong. You have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But like what you need to be able to do is effectively iterate through these runs so that you can have your own control flow where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to to break out of it. But yeah, basically, it's the same point, which is like if you have two bigger unit of work to some extent, whether or not it involves gen AI. But obviously, it's particularly problematic in gen AI. You only find out afterwards when you've spent quite a lot of time and or money when it's gone off and done done the wrong thing.Swyx [00:25:39]: Oh, drop on this. We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we we developers talk about this. And then the machine learning researchers look at us. And laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. 
So I think there's a certain amount of we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that because I think AGI is kind of this hand wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do a chain of thoughts with graphs and you could manually orchestrate a nice little graph that does like. Reflect, think about if you need more, more inference time, compute, you know, that's the hot term now. And then think again and, you know, scale that up. Or you could train Strawberry and DeepSeq R1. Right.Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And I like took a certain amount of self-control not to describe that it wasn't exponential. But my main point was. If models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model and, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through. Whereas, you know, so if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to tell them, like structure what they go and do and constrain the routes in which they take.Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Cloud to Grok. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is generative language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that like some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of like, well, I think we go the extra mile in terms of engineering is we run on every commit, at least commit to main, we run tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where, yeah, GLA is a little bit better. GLA1 was failing every single run. One of their tests would fail. And we, I think we might even have commented out that one at the moment. So like all of the models fail more often than you might expect, but like that one seems to be particularly likely to fail. 
But Vertex is the same API, but much more reliable.Swyx [00:29:01]: My rant here is that, you know, versions of this appear in Langchain and every single framework has to have its own little thing, a version of that. I would put to you, and then, you know, this is, this can be agree to disagree. This is not needed in Pydantic AI. I would much rather you adopt a layer like Lite LLM or what's the other one in JavaScript port key. And that's their job. They focus on that one thing and they, they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use deep seek, I'm out of luck because Pydantic AI doesn't have deep seek yet.Samuel [00:29:38]: Yeah, it does.Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code or should it live in a layer that's kind of your API gateway that's a defined piece of infrastructure that people have?Samuel [00:29:49]: And I think if a company who are well known, who are respected by everyone had come along and done this at the right time, maybe we should have done it a year and a half ago and said, we're going to be the universal AI layer. That would have been a credible thing to do. I've heard varying reports of Lite LLM is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail. Part of their business model is proxying the request through their, through their own system to do the generalization. That would be an enormous put off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the model. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around open AIs. Open AI's API is the one to do. So DeepSeq support that. Grok with OK support that. Ollama also does it. I mean, if there is that library right now, it's more or less the open AI SDK. And it's very high quality. It's well type checked. It uses Pydantic. So I'm biased. But I mean, I think it's pretty well respected anyway.Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.Samuel [00:31:05]: Yeah. And there's also. There's Vertex and Bedrock, which to one extent or another, effectively, they host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock. So I don't know about it that well. But they're kind of weird hybrids because they support multiple models. But like I say, the auth is centralized.Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.Alessio [00:31:39]: And I guess the other side of, you know, routing model and picking models like evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR because it just, you know, it just lets me store the HTTP requests and replay them. That part I'll kind of skip. 
I think you are busy like this test model. We're like just through Python. You try and figure out what the model might respond without actually calling the model. And then you have the function model where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?Samuel [00:32:18]: On those two, I think what you see is what you get. On the evals, I think watch this space. I think it's something that like, again, I was somewhat cynical about for some time. Still have my cynicism about some of the well, it's unfortunate that so many different things are called evals. It would be nice if we could agree. What they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire to try and support better because it's like it's an unsolved problem.Alessio [00:32:45]: Yeah, you do say in your doc that anyone who claims to know for sure exactly how your eval should be defined can safely be ignored.Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.Alessio [00:32:56]: Exactly. I was like, we need we need a snapshot of this today. And so let's talk about eval. So there's kind of like the vibe. Yeah. So you have evals, which is what you do when you're building. Right. Because you cannot really like test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LEMPs? And yeah, as people think about evals, even like what are the right things to measure? What are like the right number of samples that you need to actually start making decisions?Samuel [00:33:33]: I'm not the best person to answer that is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back of the envelope statistics calculations to work out that like having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact like how many examples do you need? For example, that's a much harder question to answer because it's, you know, it's deep within the how models operate in terms of LogFire. One of the reasons we built LogFire the way we have and we allow you to write SQL directly against your data and we're trying to build the like powerful fundamentals of observability is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is we think that's valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate and being able to write their own SQL connected to the API. And effectively query the data like it's a database with SQL allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of like testing what's possible by basically writing SQL directly against LogFire as any user could. I think the other the other really interesting bit that's going on in observability is OpenTelemetry is centralizing around semantic attributes for GenAI. So it's a relatively new project. 
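For readers who haven't seen them, the GenAI semantic conventions are just agreed-on attribute names attached to ordinary OpenTelemetry spans. Below is a hand-rolled illustration using the OpenTelemetry Python API; the specific gen_ai.* names match the flavour discussed in this conversation, but the spec has churned since, so check the current conventions before relying on these exact keys.

```python
# pip install opentelemetry-api (plus an SDK/exporter to actually ship the data)
from opentelemetry import trace

tracer = trace.get_tracer("my-app")

def record_chat_call(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    # One span per LLM call, with gen_ai.* attributes alongside whatever else you track.
    with tracer.start_as_current_span(f"chat {model}") as span:
        span.set_attribute("gen_ai.system", "openai")
        span.set_attribute("gen_ai.request.model", model)
        span.set_attribute("gen_ai.usage.prompt_tokens", prompt_tokens)
        span.set_attribute("gen_ai.usage.completion_tokens", completion_tokens)

record_chat_call("gpt-4o", prompt_tokens=812, completion_tokens=153)
```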
A lot of it's still being added at the moment. But basically the idea that like. They unify how both SDKs and or agent frameworks send observability data to to any OpenTelemetry endpoint. And so, again, we can go and having that unification allows us to go and like basically compare different libraries, compare different models much better. That stuff's in a very like early stage of development. One of the things we're going to be working on pretty soon is basically, I suspect, GenAI will be the first agent framework that implements those semantic attributes properly. Because, again, we control and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability. With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're like plowing their own furrow. And, you know, they're a lot they're even further away from standardization.Alessio [00:35:51]: Can you maybe just give a quick overview of how OTEL ties into the AI workflows? There's kind of like the question of is, you know, a trace. And a span like a LLM call. Is it the agent? It's kind of like the broader thing you're tracking. How should people think about it?Samuel [00:36:06]: Yeah, so they have a PR that I think may have now been merged from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that because I don't think that like that's actually by any means the common use case. But like, I suppose it's fine for it to be there. The majority of the stuff in OTEL is basically defining how you would instrument. A given call to an LLM. So basically the actual LLM call, what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the like agent level consideration is not yet implemented in is not yet decided effectively. And so there's a bit of ambiguity. Obviously, what's good about OTEL is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space and exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability. Traditionally, it was sure everyone would say our observability data is very important. We must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the sequel might be sent to the observability provider. But none of the parameters would. It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist because it's all just messed up in the text. If you have that same patient asking an LLM how to. What drug they should take or how to stop smoking. You can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be like basically different order of magnitude to what's in what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog. But that would be seen as a as a like mistake. Whereas in GenAI, a lot of data is going to be sent. 
And I think that's why companies like Langsmith and are trying hard to offer observability. On prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosted self-hosting for this observability stuff with GenAI.Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have like the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of like a self-hosting for the platform, basically. Yeah. Yeah.Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we, you know, if we see password as the key, we won't send the value. But like, like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data and then we'll offer self-hosting for those people who can afford it and who need it.Alessio [00:38:42]: And then this is, I think, the first time that most of the workloads performance is depending on a third party. You know, like if you're looking at Datadog data, usually it's your app that is driving the latency and like the memory usage and all of that. Here you're going to have spans that maybe take a long time to perform because the GLA API is not working or because OpenAI is kind of like overwhelmed. Do you do anything there since like the provider is almost like the same across customers? You know, like, are you trying to surface these things for people and say, hey, this was like a very slow span, but actually all customers using OpenAI right now are seeing the same thing. So maybe don't worry about it or.Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTA. So we send. We send information at the beginning. At the beginning of a trace as well as sorry, at the beginning of a span, as well as when it finishes. By default, OTA only sends you data when the span finishes. So if you think about a request which might take like 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top level span. And so if you're using standard OTA, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing Gen AI calls or when you're like running a batch job that might take 30 minutes. That like latency of not being able to see the span is like crippling to understanding your application. And so we've we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's like the OpenLEmetry project, which doesn't really roll off the tongue. But how do you see the future of these kind of tools? Is everybody going to have to build? Why does everybody want to build? They want to build their own open source observability thing to then sell?Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTEL and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLEmetry, like interesting project. 
But I suspect eventually most of those semantic like that instrumentation of the big of the SDKs will live, like I say, inside the main OpenTelemetry report. I suppose. What happens to the agent frameworks? What data you basically need at the framework level to get the context is kind of unclear. I don't think we know the answer yet. But I mean, I was on the, I guess this is kind of semi-public, because I was on the call with the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not like natively implemented. And obviously they're having quite a tough time. And I was realizing, hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that is moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on. I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation off. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have genai.usage.prompt tokens and genai.usage.completion tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in or sort of reifying things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel is you can always go and send another attribute and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like any learnings building LogFire? 
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one. I'll say that. But like, we've got to the right one in the end. I think we could have realized that Timescale wasn't right. I think ClickHouse. They both taught us a lot and we're in a great place now. But like, yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to like double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect because, you know, Timescale is like an extension on top of Postgres. Not super meant for like high volume logging. But like, yeah, tell us those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on because apparently it does now. So they've obviously gone and built their proper JSON support. But like back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything happened to be a map and maps are a pain to try and do like looking up JSON type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff. You can choose to make them top level columns if you want. But the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse. Also, ClickHouse had some really ugly edge cases like by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they caused it to raise an error and just say you have to have the same unit. Then I complained a bit more. And I think as I understand it now, they have some. They convert between units. But like stuff like that, when all you're looking at is when a lot of what you're doing is comparing the duration of spans was really painful. Also things like you can't subtract two date times to get an interval. You have to use the date sub function. But like the fundamental thing is because we want our end users to write SQL, the like quality of the SQL, how easy it is to write, matters way more to us than if you're building like a platform on top where your developers are going to write the SQL. And once it's written and it's working, you don't mind too much. So I think that's like one of the fundamental differences. The other problem that I have with the ClickHouse and Impact Timescale is that like the ultimate architecture, the like snowflake architecture of binary data in object store queried with some kind of cache from nearby. They both have it, but it's closed sourced and you only get it if you go and use their hosted versions. 
And so even if we had got through all the problems with Timescale or ClickHouse, we would end up like, you know, they would want to be taking their 80% margin. And then we would be wanting to take that would basically leave us less space for margin. Whereas data fusion. Properly open source, all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, data fusion, which is implemented in Rust, we can literally dive into it and go and change it. So, for example, I found that there were some slowdowns in data fusion's string comparison kernel for doing like string contains. And it's just Rust code. And I could go and rewrite the string comparison kernel to be faster. Or, for example, data fusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do. It's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that like data fusion is like for us the perfect mixture of a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that like if you were trying to do that in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++. But like as a team of people who are not C++ experts, that's much scarier than data fusion for us.Swyx [00:47:47]: Yeah, that's a beautiful rant.Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know, so but I think you obviously have an open source first mindset. So that makes a lot of sense.Samuel [00:48:05]: I think if we were probably better as a startup, a better startup and faster moving and just like headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with data fusion. We like we're quite engaged now with the data fusion community. Andrew Lam, who maintains data fusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just like building on ClickHouse and moving as fast as we can.Swyx [00:48:34]: OK, we're about to zoom out and do Pydantic run and all the other stuff. But, you know, my last question on LogFire is really, you know, at some point you run out sort of community goodwill just because like, oh, I use Pydantic. I love Pydantic. I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the honeycombs. Yeah. So where are you going to really spike here? What differentiator here?Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about like web observability and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term because all observability is cloud first. The same is going to happen to gen AI. And so whether or not you're trying to compete with Datadog or with Arise and Langsmith, you've got to do first class. You've got to do general purpose observability with first class support for AI. 
And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much like scarier company to compete with than the AI specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general purpose observability platform with first class support for AI. And that we have this open source heritage of putting developer experience first that other companies haven't done. For all I'm a fan of Datadog and what they've done. If you search Datadog logging Python. And you just try as a like a non-observability expert to get something up and running with Datadog and Python. It's not trivial, right? That's something Sentry have done amazingly well. But like there's enormous space in most of observability to do DX better.Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, your MIT license, you don't have any rolling license like Sentry has where you can only use an open source, like the one year old version of it. Was that a hard decision?Samuel [00:50:41]: So to be clear, LogFire is co-sourced. So Pydantic and Pydantic AI are MIT licensed and like properly open source. And then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing and the like weird pushback the community gives when they take something that's closed source and make it source available just meant that we just avoided that whole subject matter. I think the other way to look at it is like in terms of either headcount or revenue or dollars in the bank. The amount of open source we do as a company is we've got to be open source. We're up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is like openly for profit, right? As in we're not claiming otherwise. We're not sort of trying to walk a line if it's open source. But really, we want to make it hard to deploy. So you probably want to pay us. We're trying to be straight. That it's to pay for. We could change that at some point in the future, but it's not an immediate plan.Alessio [00:51:48]: All right. So the first one I saw this new I don't know if it's like a product you're building the Pydantic that run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreter for lamps. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah. What's the Pydantic that run story?Samuel [00:52:09]: So Pydantic that run is again completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser. See how it works. Tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it or like what the spend is. The other thing we wanted to b

The Secure Developer
The Development Of Security With David Mytton

The Secure Developer

Play Episode Listen Later Jan 21, 2025 34:23


Episode Summary: In this episode of The Secure Developer, host Danny Allan sits down with David Mytton, founder and CEO of Arcjet, former CEO of Server Density, and co-founder of Console.dev. David shares his insights into bridging the "developer-security gap" with Arcjet, a cutting-edge middleware SDK designed to empower developers with advanced security tools like rate limiting and bot protection. The conversation dives into the evolution of developer tools, the growing role of AI in coding, and the future of secure software development in modern environments. David also offers a fascinating perspective on sustainable computing and the impact of clean energy in the tech industry.

Show Notes: In this thought-provoking episode of The Secure Developer, host Danny Allan sits down with David Mytton, founder and CEO of Arcjet, to explore the evolving intersection of development, security, and AI. David, a serial entrepreneur with deep roots in cloud monitoring and developer tools, shares his journey from co-founding Server Density to building Arcjet, a groundbreaking solution for developers managing runtime security.

The conversation begins with David's take on why developers should prioritize security early in the development lifecycle. He highlights the challenges developers face in modern environments, where traditional security tools often fail to integrate seamlessly with serverless and edge computing platforms. David introduces Arcjet as an innovative SDK that empowers developers to implement rate-limiting, bot detection, and other security measures directly in their applications, offering a developer-first approach to runtime protection.

Delving deeper, the discussion shifts to the rise of WebAssembly as a transformative technology. David explains how WebAssembly enables near-native performance across platforms while providing unparalleled isolation, making it a perfect fit for modern security needs. He contrasts this with traditional intrusion detection systems and outlines how Arcjet leverages WebAssembly to fill the gaps left by legacy tools.

The episode also explores the broader evolution of the developer ecosystem. From the increasing adoption of AI-powered coding tools to the growing interest in languages like Rust, David shares his perspective on how these trends are reshaping software development. He also discusses the challenges of balancing AI-generated code with the need for security and the potential for AI to exacerbate vulnerabilities if not carefully managed.

As the conversation wraps up, David touches on his research in sustainable computing and its implications for the tech industry. He highlights the positive strides being made toward greener computing practices and how developers can contribute to a more sustainable future.

This episode offers a rich blend of technical insights, forward-thinking ideas, and practical advice for developers and security professionals navigating the ever-changing landscape of software security and development.

Links: Arcjet, Console, Acquia, Rust Programming Language, University of Oxford, Snyk - The Developer Security Company. Follow Us: Our Website, Our LinkedIn
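Arcjet itself is a JavaScript SDK, so the following is only a language-neutral illustration, in Rust, of the kind of check such middleware performs before a request reaches the handler: a per-client token-bucket rate limiter. The structure and names are assumptions made for this sketch, not Arcjet's actual API.

```rust
// A minimal token-bucket rate limiter of the kind security middleware
// applies per client before a request reaches the handler.
// Illustrative only; not Arcjet's API.
use std::time::Instant;

struct TokenBucket {
    capacity: f64,       // maximum burst size
    refill_per_sec: f64, // sustained request rate
    tokens: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, tokens: capacity, last_refill: Instant::now() }
    }

    /// Returns true if the request is allowed, false if it should get a 429.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // A 5-request burst, refilling at 1 request per second, for one client.
    let mut bucket = TokenBucket::new(5.0, 1.0);
    for i in 1..=7 {
        println!("request {i}: {}", if bucket.allow() { "allowed" } else { "rate limited (429)" });
    }
}
```

In a real deployment the bucket would be keyed by client identity (IP, API key, session) and shared across instances; the SDK approach discussed in the episode moves this decision into application code rather than a separate appliance.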

Les Cast Codeurs Podcast
LCC 321 - Les évènements écran large

Les Cast Codeurs Podcast

Play Episode Listen Later Jan 21, 2025 73:53


Arnaud and Emmanuel discuss Java versions, give a summary of the WebAssembly ecosystem, discuss the new Model Context Protocol, talk about observability, notably Wide Events, and plenty of other things. Recorded on January 17, 2025. Download the episode LesCastCodeurs-Episode–321.mp3, or watch the video on YouTube.

News. Languages. Java trends report by InfoQ https://www.infoq.com/articles/java-trends-report–2024/ Java 17 finally overtakes 11 and 8, at roughly 30/33%; Java 21 is at 1.4%; Commonhaus appears; GraalVM is in the early majority; Spring AI and LangChain4j are among the innovators; Spring Boot 3 sees its adoption grow.

A good summary of WebAssembly and its various specs such as Wasm GC, WASI, WIT, etc. https://2ality.com/2025/01/webassembly-language-ecosystem.html WebAssembly (Wasm) is a binary instruction format for a stack-based virtual machine, enabling code portability and efficiency. Wasm evolved from asm.js, a subset of JavaScript that could run at near-native speeds. WASI (WebAssembly System Interface) lets Wasm run outside web browsers, providing APIs for the file system, CLI, HTTP, etc. The WebAssembly component model enables interoperability between Wasm languages using WIT (Wasm Interface Type) and a canonical ABI. Wasm components consist of a core module plus WIT interfaces for imports/exports, enabling language-independent interaction. WIT interfaces describe types and functions, while WIT worlds define a component's capabilities and needs (imports/exports). Wasm package management is handled by Warg, a protocol for Wasm package registries. A survey showed that Rust is the most used Wasm language, followed by Kotlin and C++; many other languages are also emerging.
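Since the survey above puts Rust at the top of the Wasm guest-language list, here is a minimal sketch of what compiling a Rust function to a core Wasm module looks like. It shows a plain module export only, not the component model or WIT tooling described above; turning it into a typed component requires extra tooling (for example wit-bindgen or cargo-component).

```rust
// lib.rs — a minimal core Wasm module written in Rust.
//
// Cargo.toml needs:
//   [lib]
//   crate-type = ["cdylib"]
//
// Build (assuming the wasm32 target is installed):
//   rustup target add wasm32-unknown-unknown
//   cargo build --release --target wasm32-unknown-unknown
//
// The resulting .wasm file exports a single `add` function that any Wasm
// runtime, or the browser's WebAssembly API, can instantiate and call.

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```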
A memory-bounded counting algorithm has been invented https://www.quantamagazine.org/computer-scientists-invent-an-efficient-new-way-to-count–20240516/ It drops a word at random, but with a known probability, whenever space needs to be reclaimed. This happens in rounds, and the deletion probability is increased at each round, so at the end the number of surviving words divided by their survival probability gives an approximate but fairly precise count. (A minimal Rust sketch of this estimator follows at the end of this news block.)

Libraries. Spring contributions move from the CLA to the DCO https://spring.io/blog/2025/01/06/hello-dco-goodbye-cla-simplifying-contributions-to-spring First handled manually, then even automated, the CLA is a complex legal document that can hold back contributions. The DCO comes from Linux, I believe, and is super simple: you state that your contribution is under the project's license and that the code is public and distributed in perpetuity; it relies on git's -s sign-off.

Writing an MCP server with Quarkus https://quarkus.io/blog/mcp-server/ MCP is a protocol proposed by Anthropic for integrating tools that LLMs can orchestrate. MCP is fresh and goes further than plain tool calling: it offers the notions of resources (files), functions (tools), and pre-built prompts for calling the tool in the best way. We'll come back to this with agents in a later article. There is a Quarkus extension to simplify the coding, and a more detailed article on the Quarkus integration https://quarkus.io/blog/quarkus-langchain4j-mcp/

GreenMail, a mini mail server in Java https://greenmail-mail-test.github.io/greenmail/#features-api Useful for integration tests. Supports SMTP, POP3 and IMAP with TLS/SSL. Provides JUnit and Spring integrations. A mini UI and REST APIs let you interact with the server, for example if you share it in a container (there is no existing Testcontainers integration, but it would not be complicated to write).

Infrastructure. Docker Bake in a visual way https://dev.to/aurelievache/understanding-docker-part–47-docker-bake–4p05 Docker Bake proposes using configuration files (in HCL format) to run your image builds and docker compose. Think of this DSL as a very simplified Makefile for docker commands, which often take a few too many parameters.

Datadog keeps expanding with the acquisition of Quickwit https://www.datadoghq.com/blog/datadog-acquires-quickwit/ An open-source log search solution that can be deployed on premises and in the cloud https://quickwit.io/ Logs no longer leave your environment, which addresses security, privacy and regulatory requirements.

Web. 33 concepts in JavaScript https://github.com/leonardomso/33-js-concepts Call Stack, Primitive Types, Value Types and Reference Types, Implicit, Explicit, Nominal, Structuring and Duck Typing, == vs === vs typeof, Function Scope, Block Scope and Lexical Scope, Expression vs Statement, IIFE, Modules and Namespaces, Message Queue and Event Loop, setTimeout, setInterval and requestAnimationFrame, JavaScript Engines, Bitwise Operators, Type Arrays and Array Buffers, DOM and Layout Trees, Factories and Classes, this, call, apply and bind, new, Constructor, instanceof and Instances, Prototype Inheritance and Prototype Chain, Object.create and Object.assign, map, reduce, filter, Pure Functions, Side Effects, State Mutation and Event Propagation, Closures, High Order Functions, Recursion, Collections and Generators, Promises, async/await, Data Structures, Expensive Operation and Big O Notation, Algorithms, Inheritance, Polymorphism and Code Reuse, Design Patterns, Partial Applications, Currying, Compose and Pipe, Clean Code
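Returning to the memory-bounded counting item above: below is a minimal, illustrative Rust sketch of that style of estimator (the CVM distinct-elements algorithm the linked Quanta article describes), assuming the `rand` crate (0.8). It is a didactic toy, not the authors' reference implementation.

```rust
// A toy CVM-style estimator for the number of distinct items in a stream,
// using a fixed-capacity buffer instead of remembering every item.
use rand::Rng;
use std::collections::HashSet;

fn estimate_distinct<I: IntoIterator<Item = String>>(stream: I, capacity: usize) -> f64 {
    let mut rng = rand::thread_rng();
    let mut buffer: HashSet<String> = HashSet::new();
    let mut p = 1.0_f64; // current keep probability

    for item in stream {
        // Each item's fate is re-decided on every occurrence.
        buffer.remove(&item);
        if rng.gen_bool(p) {
            buffer.insert(item);
        }
        // Buffer full: run a "round" that halves the keep probability.
        if buffer.len() >= capacity {
            buffer.retain(|_| rng.gen_bool(0.5));
            p /= 2.0;
        }
    }
    // Number of survivors divided by their survival probability.
    buffer.len() as f64 / p
}

fn main() {
    let words = ["a", "b", "a", "c", "b", "d"].iter().map(|s| s.to_string());
    println!("~{} distinct words", estimate_distinct(words, 4));
}
```

With a realistic buffer size the estimate concentrates around the true distinct count while using only `capacity` slots of memory, which is the whole point of the algorithm.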
Data and Artificial Intelligence. Phi-4 and small language models https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi–4-microsoft%e2%80%99s-newest-small-language-model-specializing-in-comple/4357090 Phi-4 is an SLM aimed notably at local use, with 14B parameters. A nice progression of roughly 20 points on an aggregated score, which brings it close to Llama 3.3 and its 70B parameters. Good at math (synthetic data set).

How to use Gemini 2.0 Flash Thinking (Google's model that does chain-of-thought style reasoning) in Java with LangChain4j https://glaforge.dev/posts/2024/12/20/lets-think-with-gemini–2-thinking-mode-and-langchain4j/ Google released Gemini 2.0 Flash, a small model in the Gemini family. The "thinking mode" simulates reasoning paths (chain of thought, etc.) and decomposes complex tasks much further into several sub-tasks. An example is shown of the model wrestling with the problem.

Anthropic's recommendations on agent systems https://www.anthropic.com/research/building-effective-agents The post defines agents and workflows, and does not recommend starting with frameworks (LangChain, Amazon Bedrock AI Agent, etc.): the familiar debate about abstraction. Many of the patterns can be implemented in a few lines without a framework. Several building blocks of increasing complexity: Augmented LLM (RAG, memory, etc.): Anthropic says LLMs know how to coordinate this, via MCP for example. Prompt-chaining workflow: successive LLM calls with gates in between; it favors precision over latency, since the task is decomposed into several LLM calls. Routing workflow: classify an input and pick the best route, a separation of responsibilities. Parallelization workflow: LLMs work in parallel on a task and an aggregator does the synthesis, either by splitting the task into slices or by voting on the best answer. Orchestrator-workers workflow: for when the sub-tasks are not bounded or known in advance (say, the number of code files to change); the sub-tasks are not predefined. Evaluator-optimizer workflow: one LLM proposes an answer, another LLM evaluates it and asks for a better answer if needed. Agents: a command or an interaction with a human, then autonomy, even if the agent can come back and ask the human for clarification. Agents are often an LLM using tools to modify the environment and reacting to feedback in a loop; ideal for open-ended problems where the number of steps is not known. Anthropic recommends ramping up complexity progressively. (A minimal Rust sketch of the prompt-chaining workflow appears after the tooling item below.)

AI isn't cheap https://techcrunch.com/2025/01/05/openai-is-losing-money-on-its-pricey-chatgpt-pro-plan-ceo-sam-altman-says/ OpenAI says that even with $200/month subscriptions it does not cover the associated costs... When does the AI bubble pop?

Tooling. Ghostty, a new terminal for Linux and macOS: https://ghostty.org/ Started by Mitchell Hashimoto (HashiCorp). Ghostty is a native terminal emulator for macOS and Linux. It is written in Swift, using AppKit and SwiftUI, on macOS, and in Zig, using the GTK4 C API, on Linux. It uses native UI components and standard keyboard and mouse shortcuts. It supports Quick Look, Force Touch and other macOS-specific features. Ghostty tries to provide a rich set of features useful for everyday use.
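To make the prompt-chaining pattern above concrete, here is a minimal, hypothetical Rust sketch. The `Llm` trait and the mock client are stand-ins invented for the example (not any real SDK); the gate is just a cheap programmatic check between the two calls, exactly as Anthropic's post describes.

```rust
// Prompt chaining with a gate between two LLM calls (illustrative only).
// `Llm` is a hypothetical trait standing in for a real client library.
trait Llm {
    fn complete(&self, prompt: &str) -> String;
}

/// Step 1: draft an outline. Gate: sanity-check it. Step 2: write the summary.
fn chained_summary(model: &dyn Llm, document: &str) -> Result<String, String> {
    let outline = model.complete(&format!(
        "List the 3 main points of this document as bullet points:\n{document}"
    ));

    // Gate: a cheap deterministic check before spending another call.
    if outline.lines().filter(|l| l.trim_start().starts_with('-')).count() < 3 {
        return Err(format!("gate failed, outline was: {outline}"));
    }

    let summary = model.complete(&format!(
        "Write a one-paragraph summary of the document based on this outline:\n{outline}"
    ));
    Ok(summary)
}

// A canned client so the sketch runs without any API.
struct MockLlm;
impl Llm for MockLlm {
    fn complete(&self, prompt: &str) -> String {
        if prompt.starts_with("List") {
            "- point one\n- point two\n- point three".to_string()
        } else {
            "A short summary paragraph.".to_string()
        }
    }
}

fn main() {
    let out = chained_summary(&MockLlm, "some document text");
    println!("{out:?}");
}
```

The routing and evaluator-optimizer workflows mentioned above follow the same shape: ordinary control flow around a handful of model calls, which is why the post argues a framework is often unnecessary.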
How Pinterest uses Honeycomb to improve its CI https://medium.com/pinterest-engineering/how-pinterest-leverages-honeycomb-to-enhance-ci-observability-and-improve-ci-build-stability–15eede563d75 Pinterest uses Honeycomb to improve the observability of its continuous integration (CI). Honeycomb lets Pinterest visualize build metrics, analyze trends and make data-driven decisions. Honeycomb also helps Pinterest identify the likely causes of build failures and streamline on-call work. Honeycomb can also be used to track local iOS build metrics alongside machine details, which helps Pinterest prioritize laptop upgrades for developers.

Methodologies. Following our episode on the different types of documentation, this article covers good practices for tutorials https://refactoringenglish.com/chapters/rules-for-software-tutorials/ Write tutorials for beginners, avoiding jargon and complex terminology. Promise a clear result in the title and explain the goal in the introduction. Show the end result early to reduce ambiguity. Make code snippets copy-pasteable, avoiding shell prompts and interactive commands. Use the long versions of command-line flags for clarity. Separate user-defined values from reusable logic using environment variables or named constants. Spare the reader unnecessary tasks by using scripts. Let computers evaluate conditional logic, not the reader. Keep the code working throughout the tutorial. Teach one thing per tutorial and minimize dependencies.

Wide events, a "new" concept in observability https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/ and another article https://isburmistrov.substack.com/p/all-you-need-is-wide-events-not-metrics The idea is to log events (JSON-style logs) with as much information as possible: the machine, the RAM, the app version, the user, the build number that produced the app, the latest PR, and so on. This lets you filter, group by, and spot visual correlations very quickly, and zoom in: sales are down 20%; ah, it actually comes from the Android app; ah, it's not correlated with the app version, but it is with the OS version! The second article is easy to read; the first is an exhaustive usage guide for the concept. (A minimal Rust sketch of a wide event follows this section.)

Between arguing and giving it five minutes https://signalvnoise.com/posts/3124-give-it-five-minutes We often want to argue, that is, ask questions while already holding the answer emotionally, which leads to a lot of verbiage. Give the idea five minutes, time to think about it, before arguing.

Law, society and organization. Federal judges put an end to the net neutrality principle https://www.lemonde.fr/pixels/article/2025/01/03/les-etats-unis-reviennent-en-arriere-sur-le-principe-de-la-neutralite-du-net_6479575_4408996.html?lmd_medium=al&lmd_campaign=envoye-par-appli&lmd_creation=ios&lmd_source=default Net neutrality is the prohibition on treating a packet differently depending on its sender, for example a Netflix packet being slowed down versus an Amazon packet. Donald Trump is against this neutrality. We will see the concrete impact in a less regulated market.
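As a concrete illustration of the wide-events idea above, here is a minimal Rust sketch that emits one context-rich JSON line per unit of work, using the `serde_json` crate. The field names are made up for the example; the point is that everything you might later want to group or filter by travels on the same event.

```rust
// Emit a single "wide event" per unit of work: one JSON line carrying as
// much context as possible, so it can be filtered and grouped later.
// Field names here are illustrative, not a standard schema.
use serde_json::json;

fn handle_checkout(user_id: &str, cart_total_cents: u64) {
    let started = std::time::Instant::now();

    // ... do the actual work here ...

    let event = json!({
        "event": "checkout_completed",
        "user_id": user_id,
        "cart_total_cents": cart_total_cents,
        "duration_ms": started.elapsed().as_millis() as u64,
        "app_version": env!("CARGO_PKG_VERSION"),
        "build_sha": option_env!("GIT_SHA").unwrap_or("unknown"),
        "host": std::env::var("HOSTNAME").unwrap_or_default(),
        "os": std::env::consts::OS,
    });
    // One line per event makes it easy for a log pipeline to ingest.
    println!("{event}");
}

fn main() {
    handle_checkout("user-42", 1999);
}
```

Once every request emits an event like this, the "sales dropped 20%, oh, it's only the Android app on one OS version" style of investigation becomes a group-by rather than a log-grepping exercise.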
Beginners' corner. A short article on float vs double in Java https://www.baeldung.com/java-float-vs-double 4 vs 8 bytes; maximum precision of about 7 vs 15 digits; scale around 10^38 vs 10^308 (orders of magnitude); performance is roughly similar, except perhaps for AI models, which sometimes favor a smaller size; watch out for overflow and for accumulated rounding errors, in which case reach for BigDecimal. (A small Rust f32/f64 demo of the same idea appears after these episode notes.)

Conferences. The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: 20 January 2025: Elastic{ON} - Paris (France) 22–25 January 2025: SnowCamp 2025 - Grenoble (France) 24–25 January 2025: Agile Games Île-de-France 2025 - Paris (France) 6–7 February 2025: Touraine Tech - Tours (France) 21 February 2025: LyonJS 100 - Lyon (France) 28 February 2025: Paris TS La Conf - Paris (France) 6 March 2025: DevCon #24: 100% IA - Paris (France) 13 March 2025: Oracle CloudWorld Tour Paris - Paris (France) 14 March 2025: Rust In Paris 2025 - Paris (France) 19–21 March 2025: React Paris - Paris (France) 20 March 2025: PGDay Paris - Paris (France) 20–21 March 2025: Agile Niort - Niort (France) 25 March 2025: ParisTestConf - Paris (France) 26–29 March 2025: JChateau Unconference 2025 - Cour-Cheverny (France) 27–28 March 2025: SymfonyLive Paris 2025 - Paris (France) 28 March 2025: DataDays - Lille (France) 28–29 March 2025: Agile Games France 2025 - Lille (France) 3 April 2025: DotJS - Paris (France) 3 April 2025: SoCraTes Rennes 2025 - Rennes (France) 4 April 2025: Flutter Connection 2025 - Paris (France) 10–11 April 2025: Android Makers - Montrouge (France) 10–12 April 2025: Devoxx Greece - Athens (Greece) 16–18 April 2025: Devoxx France - Paris (France) 23–25 April 2025: MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France) 24 April 2025: IA Data Day 2025 - Strasbourg (France) 29–30 April 2025: MixIT - Lyon (France) 7–9 May 2025: Devoxx UK - London (UK) 15 May 2025: Cloud Toulouse - Toulouse (France) 16 May 2025: AFUP Day 2025 Lille - Lille (France) 16 May 2025: AFUP Day 2025 Lyon - Lyon (France) 16 May 2025: AFUP Day 2025 Poitiers - Poitiers (France) 24 May 2025: Polycloud - Montpellier (France) 5–6 June 2025: AlpesCraft - Grenoble (France) 5–6 June 2025: Devquest 2025 - Niort (France) 11–13 June 2025: Devoxx Poland - Krakow (Poland) 12–13 June 2025: Agile Tour Toulouse - Toulouse (France) 12–13 June 2025: DevLille - Lille (France) 17 June 2025: Mobilis In Mobile - Nantes (France) 24 June 2025: WAX 2025 - Aix-en-Provence (France) 25–27 June 2025: BreizhCamp 2025 - Rennes (France) 26–27 June 2025: Sunny Tech - Montpellier (France) 1–4 July 2025: Open edX Conference - 2025 - Palaiseau (France) 7–9 July 2025: Riviera DEV 2025 - Sophia Antipolis (France) 18–19 September 2025: API Platform Conference - Lille (France) & Online 2–3 October 2025: Volcamp - Clermont-Ferrand (France) 6–10 October 2025: Devoxx Belgium - Antwerp (Belgium) 9–10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France) 16–17 October 2025: DevFest Nantes - Nantes (France) 4–7 November 2025: NewCrafts 2025 - Paris (France) 6 November 2025: dotAI 2025 - Paris (France) 7 November 2025: BDX I/O - Bordeaux (France) 12–14 November 2025: Devoxx Morocco - Marrakech (Morocco) 23–25 April 2026: Devoxx Greece - Athens (Greece) 17 June 2026: Devoxx Poland - Krakow (Poland)

Contact us. To react to this episode, come and discuss it on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or
Bluesky https://bsky.app/profile/lescastcodeurs.com Send in a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/
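To illustrate the float-vs-double item from the beginners' corner above in Rust terms (f32 and f64 play the same roles as Java's float and double), here is a tiny, self-contained demo of the precision gap and of accumulated rounding error.

```rust
// f32 vs f64: same trade-off as Java's float vs double
// (4 vs 8 bytes, roughly 7 vs 15-16 significant decimal digits).
fn main() {
    let a: f32 = 0.1;
    let b: f64 = 0.1;
    println!("f32 0.1 -> {:.20}", a); // error shows up around the 8th digit
    println!("f64 0.1 -> {:.20}", b); // error pushed much further out

    // Accumulated rounding error: summing 0.1 ten thousand times.
    let sum32: f32 = (0..10_000).map(|_| 0.1_f32).sum();
    let sum64: f64 = (0..10_000).map(|_| 0.1_f64).sum();
    println!("f32 sum = {sum32}  (exact answer is 1000)");
    println!("f64 sum = {sum64}");
}
```

When exact decimal results matter (money, for instance), the equivalent of the article's BigDecimal advice applies: use a decimal type rather than binary floating point.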

Talking Drupal
Talking Drupal #478 - WebAssembly

Talking Drupal

Play Episode Listen Later Dec 2, 2024 65:36


Today we are talking about WebAssembly, how it's used, and cool things you can use it for with Drupal, with guest Matt Glaman. We'll also cover Darkmode JS as our module of the week. For show notes visit: https://www.talkingDrupal.com/478 Topics What is WebAssembly Progressive Web Apps Open source Does it have a community Browser support How does it work Common use cases How can you use this with Drupal This was an early concept for Drupal trial Challenges WordPress Playground Pieces that do not work for PHP Are there risks Are there resources for people that want to use WebAssembly Do you see it being used with Drupal Resources WebAssembly WebAssembly history Browser support 2038 WordPress Playground https://playground.wordpress.net https://github.com/adamziel Slides from Barcelona: The Web APIs powering the Starshot trial: https://mglaman.dev/sites/default/files/2024-09/The Web APIs powering the Starshot trial experience.pdf https://www.youtube.com/watch?v=rJVM_uDGD5I&list=PLpeDXSh4nHjQOfQV-BUgoxHXlr4tHlhPO&index=64 Guests Matt Glaman - mglaman.dev mglaman Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Suzanne Dergacheva - evolvingweb.com pixelite MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted your Drupal site to provide a widget that allows visitors to go over to the dark side of your theme? There's a module for that. Module name/project name: Darkmode JS Brief history How old: created in May 2022 by Arthur Baghdasaryan (arthur.baghdasar) of Last Call Media Versions available: 1.0.7, which works with Drupal 9, 10, and 11 Maintainership Actively maintained Security coverage Number of open issues: 1 open issue, which is a bug against the current branch, but is postponed, waiting for more info Usage stats: 89 sites Module features and usage The module is a wrapper for the DarkmodeJS library, which gets 1,000 weekly downloads, according to NPM. That library does have its own demo / tutorial site, so if you want to understand the options it exposes, we will add a link in the show notes. The module provides options to control where on the page you want the widget to appear, what colors it should use, whether or not to store a user's choices in cookies, and whether or not to automatically match a visitor's OS theme setting of light/dark. Installing the module currently requires making some changes to your site's composer.json file, then configuring how you want the widget to appear, and then placing the block in your site theme. The module also doesn't currently include a schema file for its configuration, which can cause challenges, particularly for sites that run automated tests.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 2, 2024 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one ID extension with unit tests as in focus. One and a half year later, first ID extension supports more type of testing as context aware. We index local, local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool that is called, the pure agent is the open source and the commercial one is CodoMerge. And then we have another open source called CoverAgent, which is not yet a commercial product coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they don't even aware in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half year, what we did is grew in our offering and mostly on the side of, does this code actually works, testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. 
And then like for the first year was everything bottom up, getting to 1 million installation. 2024, that was 2023, 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with a thousand of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Codelm. So that's how we call it at Codelm. Just opening the brackets, our company name was Codelm AI, and we renamed to Codo and we call our models Codelm. So back to my point, so we started Enterprise Motion and already have multiple Fortune 100 companies. And then with that, we raised a series of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an ID or something like that.Swyx [00:06:01]: You don't want to fork this code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Codogen, Codomerge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Codocover. Cover. Which is like a commercial version of a cover agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGBT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point on not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bot.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bot.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it. 
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two type of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a cursor is more like a evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel, Veo and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made 4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to camera out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build it, a deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, stack list investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to ShowMeBolt before the launch. And we, you know, we talked a lot about, like, the framing of, of what we're going to talk about how we marketed the thing, but also, like, what we're So that's what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and CognitionDevIn and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to Navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my bolt launch story and now if I say all that stuff.swyx: And I just wanted to come back to, like, the Webcontainers things, right? Like, I think you put a lot of weight on the technical modes. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on products. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has Bax E2B, which we'll have on at some point, talking about like the sort of the server full side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully gentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in bold, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in web container, you know, there's a lot of kind of stuff you go Google like, you know, turn docker container into wasm.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, Docker 2 awesome things will give you an image that's like out 60 to 100 megabits, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: This is basically the task involved is I understand that it's. Mapping every single, single Linux call to some kind of web, web assembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know audio drivers or you like, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes . Yeah. You can just kind, you can, you can kind of tos them. Or alternatively, what you can do is you can actually be the nice thing. 
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagree with the idea that should be document based, which is, you know, Tim Berners Lee, you know, that, and that's kind of what ended up winning, winning was this document based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run. Web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node. js. Full Node. js. Like that capability. Like, I can have a URL that's programmatically controlled. By a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like data of web container online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R and D on the thing. Kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, building app in app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.Eric [00:23:47]: yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. 
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important there. And when you think about enterprise software, they're really important. So to recap, like, I think the idea of looping and verifying your tests and verifying your code in different ways — testing or code review, et cetera — seems to be important in the highway AI and the city AI, but in different ways, and more critical for the city, with even more variety. Actually, I was looking to ask you what kind of loops you guys are doing. For example, when I'm using Bolt — and I'm enjoying it a lot — I do see that sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and so on, which is already a common notion from a year ago. But it seems like you're doing it really well. So, if you're willing to share anything about it.

Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch, because what ended up being important about that is that, to your point — like, compared to, you know, if you're running Cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error that happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebContainer is, because we wrote the entire thing from scratch, it's actually a unified image, basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt: we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would be not impossible to do locally, but to do that in a way that works across any operating system, whatever it is — I mean, it would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt: when an error actually does occur, whether it's in the build process, or the actual web application itself is failing, or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it, largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit. But basically we present it and we say, hey, here's kind of the things that went wrong. There's a fix-it button and then an ignore button, and you can just hit fix it. And then we take all that telemetry, run it through our agent, and say, kind of, here's the state of the application, here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been — you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think Claude did a really good job with artifacts.
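A rough sketch of the kind of closed loop described here — capture errors from the build, the runtime, and the browser, then hand the telemetry plus some application state to an agent when the user clicks "Fix it". Everything in this snippet (the endpoint, the payload shape, the helper name) is hypothetical; Bolt's real instrumentation sits much deeper, at the process and runtime level inside WebContainer.

```ts
// fix-loop.ts — illustrative only; the endpoint and payload shape are invented.
type CapturedError = {
  source: "build" | "runtime" | "browser";
  message: string;
  stack?: string;
  timestamp: number;
};

const errors: CapturedError[] = [];

// Capture uncaught errors and unhandled promise rejections from the running app.
window.addEventListener("error", (e) => {
  errors.push({ source: "browser", message: e.message, stack: e.error?.stack, timestamp: Date.now() });
});
window.addEventListener("unhandledrejection", (e) => {
  errors.push({ source: "runtime", message: String(e.reason), timestamp: Date.now() });
});

// When the user clicks "Fix it", send the telemetry and a summary of the
// project state to an agent endpoint, and return whatever patch comes back.
export async function requestFix(projectFiles: Record<string, string>) {
  const res = await fetch("/api/agent/fix", {          // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ errors, files: projectFiles }),
  });
  const { patchedFiles } = await res.json();            // assumed response shape
  return patchedFiles as Record<string, string>;
}
```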
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, it's most of the work you're doing actually figuring out environment and like the libraries, because I'm sure they're using outdated version of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it because we work with enterprise and Fortune 500, etc. Many of them want on prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we had, we did a metric metrics, say 96 options, because, you know, they're different dimensions. Like, for example, one dimension, we connect to your code management system to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on prem? Just an example. Which model agree to use its APIs or ours? Like we have our Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. 
So we got—

Swyx [00:33:02]: I'm interested in these learnings, like things that you changed your mind on.

Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand, and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about whether it's a problem, because when I think about Bolt, I do think about a couple of companies that are already called this way.

Swyx [00:33:19]: Curse companies. You could call it Codium just to...

Itamar [00:33:24]: Okay, thank you. Touche. Touche.

Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt. One of our investors — you can imagine they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.

Itamar [00:33:43]: At this point, we have actually four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like, dedicated for code?

Swyx [00:34:04]: There's code indexing, and then you can do sort of like the HyDE approach for code, and then you can embed the descriptions of the code.

Itamar [00:34:12]: Yeah, but you do see a lot of types of models that are dedicated for embedding, for different spaces, different fields, etc. And I'm not aware of one. And I know that if you go to Bedrock and try to find one, there are a few code embedding models, but none of them are specialized for code.

Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?

Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 options of deployment — so I'm closing the bracket for us. So one dimension is, like, what Git deployment you have, what models you agree to use. Another could be if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which is different. Do you use Kubernetes or not? Because we want to exploit that, and there are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with Fortune 500 enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code — I said, that's just the deployment part. And then now to the software: we see a lot of different challenges. For example, some companies, they did actually a good job to build a lot of microservices. Let's not get into whether it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has their own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember coming to a corporate for the first time — I don't know where to look, like where to find things. So just doing a good indexing for that is a challenge. And moreover, the regular indexing, the one that you can find — we wrote a few blogs on that; by the way, we also have some open source, different than yours, but actually three and growing — then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, mark different repos with different colors. This is a high-quality repo. This is a lower-quality repo. This is a repo that we want to deprecate.
This is a repo we want to grow, etc. And let that be part of your indexing. And only then do things actually work for enterprise, and they don't get to a fatigue of: oh, this is awesome — oh, but I'm starting, it's annoying me. I think Copilot is an amazing tool, but I'm quoting others — meaning GitHub Copilot — that they see not-so-good retention of GitHub Copilot in enterprise. Ooh, spicy. Yeah. I saw snapshots of people, and we have customers that are Copilot users as well. And also I saw research — some of it is public, by the way — between 38 and 50% retention for users using Copilot in enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason: because, yeah, it helps you autocomplete, but then — and especially if you're working on your repo alone — if it needs that context of remote repos in your code base, that's hard. So to make things work, there's a lot of work on that, like giving the controllability to the tech leads, to the developer platform or developer experience department in the organization, to influence how things are working. A short example: if you have really old legacy code, probably some of it is not so good anymore. If you just fine-tune on that code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Coda, you can have a markdown of best practices by the tech leads, and Coda will include that and relate to that and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software.

Eric [00:38:32]: I just want to say, what you're doing is extremely impressive, because it's very difficult. I mean, the business of StackBlitz, kind of before Bolt came online — we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is — I mean, we were just doing that with kind of Kubernetes-based stuff — but across the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, cloud stuff? Or are they just running stuff on GPUs? Like, what is that? How are these folks approaching that? Because, man, what we saw on the enterprise side — I mean, I got to imagine that that's a huge challenge.

Itamar [00:39:15]: Everything you said and more. Like, for example, someone could be — and I don't think any of these is bad; like, they made their decision — for example, some people are: I want only AWS, and VPC on AWS, no matter what. And then some of them — there is a subset, I will say — are willing to take models only from Bedrock and not ours. And we have a problem, because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS to solve that; we solve it in a different way. But if you are willing to run on AWS VPC, but run your models on GPUs or Inferentia — like the new version of them coming out — then our models can run on that. But everything you said is right. Like, we see on-prem deployment where they have their own GPUs. We see Azure, where you're using Azure OpenAI.
We see cases where you're running on GCP and they want OpenAI. Like this cross, like, case — although there is Gemini, or even Sonnet, I think, available on GCP — just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually — that matrix that I mentioned — to start clicking each one of the blocks there.

Eric [00:40:35]: A few months is impressive. I mean, honestly, that's just — okay. Every one of these enterprises, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That is, that's extremely impressive. Hats off.

Itamar [00:40:50]: It could be, by the way, like, for example: oh, we're only AWS, but our GitHub Enterprise is on-prem. Oh, we forgot. So we need like a private link or whatever — like, every time something like that. And you do need to think about it if you want to work with an enterprise. And it's important. Like, I understand their — I respect their point of view.

Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...

Itamar [00:41:15]: Yeah, definitely. To be frank, it makes it hard for a startup, because it means that we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our AlphaCodium, which is open source.

Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.

Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.

Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...

Itamar [00:41:43]: Yeah. So, just shortly, and then we can double-click on AlphaCodium. But AlphaCodium is an open-source tool. You can go and try it, and it lets you compete on Codeforces — this is a website and a competition — and actually reach a master level, like 95%, with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it into different, like, smaller blocks, and then the models are doing a much better job. Like, we all know it by now: taking small tasks and solving them — by the way, even O1, which is supposed to be able to do system-two thinking, like Greg from OpenAI hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented just a month ago. OpenAI released that now they are doing 93rd percentile with O1 IOI — the International Olympiad in Informatics. Sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with AlphaCodium and did better. Like, it just shows — and there is a big difference between the preview and the IOI one — it shows that these models are still not system-two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can, we can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think we didn't see it even close to system-two thinking. I can elaborate later. But closing the bracket: we take AlphaCodium as our principle of thinking — we take tasks and break them down into smaller tasks.
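The decomposition idea can be sketched very simply: ask the model for a plan of small tasks first, then solve them one at a time, carrying forward intermediate results. The snippet below is a toy illustration under stated assumptions — `callModel` is a stand-in for whatever LLM API is used and the prompts are made up; it is not AlphaCodium's or Bolt's actual pipeline.

```ts
// flow.ts — a toy "flow engineering" loop.
// callModel is a placeholder for any LLM client; prompts are illustrative.
declare function callModel(prompt: string): Promise<string>;

export async function solveWithFlow(problem: string): Promise<string> {
  // Step 1: ask the model for a plan instead of a full solution.
  const plan = await callModel(
    `Break this problem into a short, ordered list of small tasks, one per line:\n${problem}`
  );
  const tasks = plan.split("\n").map((t) => t.trim()).filter(Boolean);

  // Step 2: solve each small task, carrying forward the work done so far.
  let workSoFar = "";
  for (const task of tasks) {
    workSoFar = await callModel(
      `Problem: ${problem}\nWork so far:\n${workSoFar}\nNow do only this task: ${task}`
    );
  }

  // Step 3: a final pass to assemble and sanity-check the result.
  return callModel(`Review and finalize this solution:\n${workSoFar}`);
}
```

Because each prompt only has to carry one small task, the same loop tends to transfer across different models with far less prompt tuning, which is the point made next.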
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and SONET and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have all air gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down to two different blocks is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk how we do that, then the prompt matters less. What I want to say, like all this, like as a startup trying to do different deployment, getting all the juice that you can get from models, etc. is a big problem. And one need to think about it. And one of our mitigation is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in is, because for us too with Bolt, we've started thinking because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper on a blog or like whatever.Swyx [00:45:22]: The Alphacodeum, GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results of O1, we published it.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that are just kind of keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, true merit and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools are, we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case, there is, in my opinion, relatively a good strategy where a lot of parts of open source, but then you have the deployment and the environment, which is not right if I get it correctly. And then there's a clear, almost hugging face model. Yeah, you can do that, but why should you try to deploy it yourself, deploy it with us? But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are. 
I wanted to ask you, for example, about some of them. In our case, one day we looked at one of our competitors that is doing code review. We're a platform — we have the code review, the testing, et cetera, spread from the IDE to Git. And for each agent, we have a few startups or big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI to our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good, we don't use enough Grammarly or whatever. And we had a couple of these, and we saw it there. And then it's a challenge. And I want to ask you: Bolt is doing so well, and then you open source it. So I think I know what my answer was — I gave it before — but still interesting to hear what you think.

Eric [00:47:29]: GeoHot said — back, I don't know what he was up to at this exact moment, but I think on comma AI, all that stuff's open source — and someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons there are kind of strategic things you have to take into mind. But I actually think a pretty liberal approach — as liberal as you kind of can be — can really make a lot of sense. Because that is so validating, that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right? I think it's kind of healthy, because it keeps — I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition, and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality sooner, right? And actually feel that incrementally, so you can kind of adjust course. And for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff had landed in the open source version. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great, because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there are some UX patterns we can look at, and the code is open source for this stuff — what's great about these, what's not. So anyways, net-net, I think it's awesome. I think from a competitive point of view for us, what's interesting in particular is the core technology of WebContainer going. And I think that right now there's really nothing that's kind of on par with that. And we also, we have a business of — because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM.
You have to make CORS bypass requests, like connecting to databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think there's going to be a lot more of these in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts and Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm, and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, there'll be lots of things being announced, with folks kind of adopting it. But yeah, I think effectively...

Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.

Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people are not using their model as they would want to, so they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still — is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...

Swyx [00:51:16]: What's your advice as a founder?

Eric [00:51:18]: You're right. And so going into it, we, candidly, we were like, Bolt.new, this thing is super cool. We think people are stoked. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind-blown if we added a couple hundred K of ARR or something. And we were like, but we think there's probably going to be an immediate huge business, because there was some early pull from folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen pull on both. But what's happened with Bolt — and you're right, it's actually the same strategy as, like, OpenAI or Anthropic — what ChatGPT is to OpenAI's APIs, Bolt is to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now the revenue side is extremely lopsided to Bolt.

Itamar [00:52:16]: I think if you ask me what's my advice, I think you have three options. One is to focus on Bolt. The other is to focus on WebContainer. The third is to raise one billion dollars and do them both. I'm serious. I think otherwise, you need to choose. And if you raise enough money — and I think it's big bucks, because you're going to be chased by competitors — I think it will be challenging to do both. And maybe you can, I don't know. We do see these numbers right now, raising above $100 million, even without having a product. You can see these.

Eric [00:52:49]: It's excellent advice. And I think what's been amazing, but also kind of challenging, is we're trying to forecast, okay, well, where are these things going? I mean, in the initial weeks, I think us and all the investors in the company that we're sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now.
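For reference, the server-side proxying mentioned at the top of this exchange — the piece that lets an in-browser runtime reach package registries, APIs, and databases despite CORS — can be sketched as a tiny relay like the one below. This is a generic illustration under stated assumptions, not StackBlitz's actual service; a real proxy adds authentication, allow-lists, caching, streaming, and method/body forwarding.

```ts
// proxy.ts — simplified sketch of a server-side CORS proxy (Node 18+ for global fetch).
import http from "node:http";

http
  .createServer(async (req, res) => {
    // The browser calls e.g. /proxy?url=https://registry.example.com/pkg
    const target = new URL(req.url ?? "/", "http://localhost").searchParams.get("url");
    if (!target) {
      res.writeHead(400).end("missing ?url=");
      return;
    }
    const upstream = await fetch(target); // forward a simple GET upstream
    res.writeHead(upstream.status, {
      "Content-Type": upstream.headers.get("content-type") ?? "application/octet-stream",
      "Access-Control-Allow-Origin": "*", // the header the browser needs to see
    });
    res.end(Buffer.from(await upstream.arrayBuffer()));
  })
  .listen(8080);
```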
Most things, you have this kind of TechCrunch launch of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se

Open at Intel
Growing the Helm Community

Open at Intel

Play Episode Listen Later Nov 20, 2024 23:29


In this episode, Matt Butcher, CEO of Fermyon and a creator of the Helm project, returns to discuss his work with Helm—a nearly ubiquitous project in Kubernetes management. Matt provides insights into Helm's evolution from version 2 to version 3 and shares his vision for Helm 4. He emphasizes the importance of maintaining stability while embracing necessary changes and highlights the role of community contributions in open source projects like Helm. The conversation covers the new features and architectural changes planned for Helm 4, as well as how individuals can get involved in its development. Matt reflects on the significance of fostering a supportive and inclusive community and encourages new contributors to join the effort, noting the current opportune moment to influence Helm's future. 00:00 Introduction 00:37 The Helm Project 01:08 WebAssembly and Spin 3 01:54 Helm's Evolution and Future 04:22 Philosophy Behind Helm 4 11:35 Community Involvement and Contribution 18:46 Encouraging New Contributors   Guest: Matt Butcher is co-founder and CEO of Fermyon, the serverless WebAssembly in the cloud company. He is one of the original creators of Helm, Brigade, CNAB, OAM, Glide, and Krustlet. He has written or co-written many books, including Learning Helm and Go in Practice. He is a co-creator of the Illustrated Children's Guide to Kubernetes series. These days, he works mostly on WebAssembly projects such as Spin, Fermyon Cloud and Bartholomew. He holds a Ph.D. in Philosophy. He lives in Colorado, where he drinks lots of coffee.

The CTO Advisor
Replay-Server-side Web Assembly with Adobe's Colin Murphy

The CTO Advisor

Play Episode Listen Later Nov 13, 2024


After meeting at Kubecon, Keith gets in touch with Colin Murphy, a Senior Software Engineer at Adobe Systems, to discuss Web Assembly. They delve into how Adobe has employed Web Assembly to bring end-user applications to web browsers. Colin also discusses the promising aspects of server-side Web Assembly and highlights areas that need further development. [...]

Syntax - Tasty Web Development Treats
842: There's Python in my JavaScript! with Andrea Giammarchi

Syntax - Tasty Web Development Treats

Play Episode Listen Later Nov 1, 2024 53:45


Scott and Wes talk with Andrea Giammarchi (aka WebReflection) about his projects, including LinkDOM and PyScript, and the exciting future of running Python in the browser via WebAssembly. Show Notes 00:00 Welcome to Syntax! 01:04 Andrea's background and early work LinkDOM 07:25 Brought to you by Sentry.io 09:56 Pyscript 14:31 Why run Python in the browser? 20:17 Using WebAssembly to run different languages in JS 23:33 The advantages of WebAssembly 25:55 What excites Andrea about WASM Proposal: ESX as core JS feature 31:10 What is WASI? 32:21 Andrea's experience with IOT and microcontrollers 35:35 How can the JS ecosystem be improved? 38:07 Should we have reactivity in the browser? Signals 41:06 Andrea's thoughts on server-side APIs 43:43 Andrea's thoughts on TypeScript 49:13 Sick Picks & Shameless Plugs Sick Picks Andrea: ESP32 Shameless Plugs Andrea: Andrea's X / Twitter Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads

Rust in Production
InfinyOn with Deb Chowdhury

Rust in Production

Play Episode Listen Later Oct 31, 2024 55:43 Transcription Available


Picture this: Your organization's data infrastructure resembles a busy kitchen with too many cooks. You're juggling Kafka for messaging, Flink for processing, Spark for analytics, Airflow for orchestration, and various Lambda functions scattered about. Each tool excellent at its job, but together they've created a complex feast of integration challenges. Your data teams are spending more time managing tools than extracting value from data. InfinyOn reimagines this chaos with a radically simple approach: a unified system for data streaming that runs everywhere. Unlike traditional solutions that struggle at the edge, InfinyOn gracefully handles data streams from IoT devices to cloud servers. And instead of cobbling together different tools, developers can build complete data pipelines using their preferred languages - be it Rust, Python, or SQL - with built-in state management. At the heart of InfinyOn is Fluvio, a Rust-based data streaming platform that's fast, reliable, and easy to use.

Cloud Security Podcast by Google
EP195 Containers vs. VMs: The Security Showdown!

Cloud Security Podcast by Google

Play Episode Listen Later Oct 21, 2024 41:16


Cross-over hosts: Kaslin Fields, co-host at Kubernetes Podcast Abdel Sghiouar, co-host at Kubernetes Podcast Guest: Michele Chubirka, Cloud Security Advocate, Google Cloud Topics: How would you approach answering the question ”what is more secure, container or a virtual machine (VM)?” Could you elaborate on the real-world implications of this for security, and perhaps provide some examples of when one might be a more suitable choice than the other? While containers boast a smaller attack surface (what about the orchestrator though?), VMs present a full operating system. How should organizations weigh these factors against each other? The speed of patching and updates is a clear advantage of containers. How significant is this in the context of today's rapidly evolving threat landscape? Are there any strategies organizations can employ to mitigate the slower update cycles associated with VMs? Both containers and VMs can be susceptible to misconfigurations, but container orchestration systems introduce another layer of complexity. How can organizations address this complexity and minimize the risk of misconfigurations leading to security vulnerabilities? What about combining containers and VMs. Can you provide some concrete examples of how this might be implemented? What benefits can organizations expect from such an approach, and what challenges might they face? How do you envision the security landscape for containers and VMs evolving in the coming years? Are there any emerging trends or technologies that could significantly impact the way we approach security for these two technologies? Resources: Container Security, with Michele Chubrika (the same episode - with extras! - at our peer podcast, “Kubernetes Podcast from Google”) EP105 Security Architect View: Cloud Migration Successes, Failures and Lessons EP54 Container Security: The Past or The Future? DORA 2024 report Container Security: It's All About the Supply Chain - Michele Chubirka Software composition analysis (SCA) DevSecOps Decisioning Principles Kubernetes CIS Benchmark Cloud-Native Consumption Principles State of WebAssembly outside the Browser - Abdel Sghiouar Why Perfect Compliance Is the Enemy of Good Kubernetes Security - Michele Chubirka - KubeCon NA 2024  

ThoughtWorks Podcast
Themes from Technology Radar Vol.31

ThoughtWorks Podcast

Play Episode Listen Later Oct 17, 2024 39:39


Volume 31 of the Technology Radar will be released on October 23, 2024. As always, it will feature 100+ technologies and techniques that we've been using with clients around the world. Alongside them will be a set of key themes that emerged during the process of putting it together. We think they offer another way into the Radar and give a unique insight on some of the most interesting issues impacting the software industry. In this episode of the Technology Podcast we discuss them: coding assistance antipatterns, Rust being anything but rusty, the rise of WebAssembly and what we describe as the "cambrian explosion of generative AI tools." To do so, Alexey Boas is joined by guests and podcast regulars Ken Mugrage and Neal Ford. Ken and Neal provide an insight into the conversations that happened during the process, and offer their perspective on the implications of these themes for the wider tech industry.  

Kubernetes Podcast from Google
Container Security, with Michele Chubrika

Kubernetes Podcast from Google

Play Episode Listen Later Oct 15, 2024 55:49


This episode is special. We collaborated with the folks behind the Cloud Security Podcast from Google, Anton Chuvakin(LinkedIn)and Tim Peacock, to bring you a joint episode. We had the pleasure to jointly interview Michelle Chubirka, a Cloud Security Developer Advocate. We talked about VM and Container security, debunked some myths about isolation, attack surfaces, immutability of containers, and more.   Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod News of the week Nvidia NIM on GKE Kubernetes Steering Committee Election Results for 2024 The schedule for KubeCon and CloudNativeCon India Diagrid Catalyst Beta Dapr on the Kubernetes Podcast with Salaboy Links from the interview Cloud Security Podcast Anton Chuvakin Tim Peacock Michelle Chubirka Dora report Container Security: It's All About the Supply Chain - Michele Chubirka Software composition analysis (SCA) DevSecOps Decisioning Principles Kubernetes CIS Benchmark Cloud-Native Consumption Principles State of WebAssembly outside the Browser - Abdel Sghiouar Why Perfect Compliance Is the Enemy of Good Kubernetes Security - Michele Chubirka - KubeCon NA 2024 Links from the post-interview chat Cloud Code Skaffold Introduction to Distributed ML Workloads with Ray on Kubernetes - Mofi Rahman & Abdel Sghiouar - KubeCon NA 2024

The Kubelist Podcast
Ep. #43, SpinKube with Kate Goldenring of Fermyon

The Kubelist Podcast

Play Episode Listen Later Sep 25, 2024 45:49


In episode 43 of The Kubelist Podcast, Kate Goldenring shares her journey from Microsoft, where she contributed to Kubernetes edge computing and the development of Akri, to joining Fermyon to work on serverless WebAssembly. She dives deep into the technical details of Spin and SpinKube, Fermyon's open-source tools for running serverless WebAssembly applications on Kubernetes. The conversation explores the potential of WebAssembly for building efficient, event-driven applications, and discusses how this technology can lead to more sustainable cloud solutions.

Heavybit Podcast Network: Master Feed
Ep. #43, SpinKube with Kate Goldenring of Fermyon

Heavybit Podcast Network: Master Feed

Play Episode Listen Later Sep 25, 2024 45:49


In episode 43 of The Kubelist Podcast, Kate Goldenring shares her journey from Microsoft, where she contributed to Kubernetes edge computing and the development of Akri, to joining Fermyon to work on serverless WebAssembly. She dives deep into the technical details of Spin and SpinKube, Fermyon's open-source tools for running serverless WebAssembly applications on Kubernetes. The conversation explores the potential of WebAssembly for building efficient, event-driven applications, and discusses how this technology can lead to more sustainable cloud solutions.

Inside Facebook Mobile
66: Inside Bento - Serverless Jupyter Notebooks at Meta

Inside Facebook Mobile

Play Episode Listen Later Aug 30, 2024 44:21


Bento is Meta's internal distribution of Jupyter Notebooks, an open-source web-based computing platform. Host Pascal is joined by Steve who worked with his team on building many features on top of Jupyter, including scheduled notebooks, sharing with colleagues and running notebooks without a remote server component by leveraging Webassembly in the browser. Got feedback? Send it to us on Threads (https://threads.net/@metatechpod), Twitter (https://twitter.com/metatechpod), Instagram (https://instagram.com/metatechpod) and don't forget to follow our host @passy (https://twitter.com/passy, https://mastodon.social/@passy, and https://threads.net/@passy_). Fancy working with us? Check out https://www.metacareers.com/. Links Scheduling Jupyter Notebooks at Meta: https://engineering.fb.com/2023/08/29/security/scheduling-jupyter-notebooks-meta/ Serverless Jupyter Notebooks at Meta: https://engineering.fb.com/2024/06/10/data-infrastructure/serverless-jupyter-notebooks-bento-meta/ Jupyter Notebooks: https://jupyter.org/  Timestamps Intro 0:06 Who is Steve? 1:49 What are Jupyter and Bento? 2:48 Who is Bento for? 3:40 Internal-only Bento features 4:42 Scheduled notebooks 11:39 Integrating with existing batch jobs 17:10 The case for serverless notebooks 20:59 Enter wasm 24:29 Upgrade paths from serverless to server 26:29 Bringing more Python libraries to the browser 30:21 Adding magick(s) 31:52 DataFrame magic and AI 36:41 What's next? 38:29 Outro 43:17

Paul's Security Weekly
Changing the Course of IoT's Future from Its Insecure Past - Paddy Harrington - ASW #297

Paul's Security Weekly

Play Episode Listen Later Aug 27, 2024 64:28


IoT devices are notorious for weak designs, insecure implementations, and a lifecycle that mostly ignores patching. We look at external factors that might lead to change, like the FCC's cybersecurity labeling for IoT. We explore the constraints that often influence poor security on these devices, whether those constraints are as consequential given modern appsec practices, and what the opportunities are to make these devices more secure for everyone. Segment resources: https://www.fcc.gov/document/cybersecurity-labeling-program-internet-things-iot-products Research by Orange Tsai into Apache HTTPD's architecture reveals several vulns, NCC Group shows techniques for hacking IoT devices with Sonos speakers, finding use cases for WebAssembly, Slack's AI leaks data, DARPA wants a future of Rust, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-297

Paul's Security Weekly TV
Apache HTTPD Vulns, Hacking IoT Speakers, Use Cases for WASM, Slack AI Leak - ASW #297

Paul's Security Weekly TV

Play Episode Listen Later Aug 27, 2024 27:08


Research by Orange Tsai into Apache HTTPD's architecture reveals several vulns, NCC Group shows techniques for hacking IoT devices with Sonos speakers, finding use cases for WebAssembly, Slack's AI leaks data, DARPA wants a future of Rust, and more! Show Notes: https://securityweekly.com/asw-297

Application Security Weekly (Audio)
Changing the Course of IoT's Future from Its Insecure Past - Paddy Harrington - ASW #297

Application Security Weekly (Audio)

Play Episode Listen Later Aug 27, 2024 64:28


IoT devices are notorious for weak designs, insecure implementations, and a lifecycle that mostly ignores patching. We look at external factors that might lead to change, like the FCC's cybersecurity labeling for IoT. We explore the constraints that often influence poor security on these devices, whether those constraints are as consequential given modern appsec practices, and what the opportunities are to make these devices more secure for everyone. Segment resources: https://www.fcc.gov/document/cybersecurity-labeling-program-internet-things-iot-products Research by Orange Tsai into Apache HTTPD's architecture reveals several vulns, NCC Group shows techniques for hacking IoT devices with Sonos speakers, finding use cases for WebAssembly, Slack's AI leaks data, DARPA wants a future of Rust, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-297

DevOps and Docker Talk
Traefik 3.0: What's New?

DevOps and Docker Talk

Play Episode Listen Later Aug 23, 2024 54:15


Bret and Nirmal were joined by Emile Vauge, CTO of Traefik Labs, to talk all about Traefik 3.0. We talk about what's new in Traefik 3, 2.x to 3.0 migrations, Kubernetes Gateway API, WebAssembly (Cloud Native Wasm), HTTP3, Tailscale, OpenTelemetry, and much more! Be sure to check out the live recording of the complete show from June 6, 2024 on YouTube (Stream 269). Includes demos. ★Topics★ Traefik Website, Traefik Labs Community Forum, Traefik's YouTube Channel, Gateway API helper CLI, ingress2gateway migration tool. Creators & Guests: Cristi Cotovan - Editor, Beth Fisher - Producer, Bret Fisher - Host, Nirmal Mehta - Host, Emile Vauge - Guest. (00:00) - Intro (02:20) - Origins of Traefik (05:01) - The Road to 3.0 (06:20) - Balancing Stability and Innovation (08:25) - Migration to Traefik 3.0 (14:58) - WebAssembly and Plugins in Traefik (21:43) - Gateway API and gRPC Support (30:32) - Gateway API Components and Configuration (33:35) - Tools for Gateway API Management (40:08) - OpenTelemetry Integration (47:21) - Future Plans and Community Contributions. You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news! Grab the best coupons for my Docker and Kubernetes courses. Join my cloud native DevOps community on Discord. Grab some merch at Bret's Loot Box. Homepage: bretfisher.com

ITSPmagazine | Technology. Cybersecurity. Society
Behind the Scenes of SquareX's Exposing DEF CON Talk and Their Latest Browser Security Innovations | A Brand Story Conversation From Black Hat USA 2024 | A SquareX Story with Vivek Ramachandran | On Location Coverage with Sean Martin and Marco Ciappe

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 8, 2024 20:21


In this Brand Story episode, Sean Martin gets to chat with Vivek Ramachandran, Co-Founder and CEO of SquareX, at the Black Hat USA conference in Las Vegas. The discussion centers around SquareX's innovative approach to browser security and its relevance in today's cybersecurity landscape.Vivek explains that SquareX is developing a browser-native security product designed to detect, mitigate, and hunt threats in real-time, specifically focusing on the online activities of enterprise employees. This solution operates entirely within the browser, leveraging advanced technologies like WebAssembly to ensure minimal impact on the user experience.The conversation shifts to the upcoming DEF CON talk by Vivek, titled “Breaking Secure Web Gateways for Fun and Profit,” which highlights the seven sins of secure web gateways and SASE SSE solutions. According to Vivek, these cloud proxies often fail to detect and block web attacks due to inherent architectural limitations. He mentions SquareX's research revealing over 25 different bypasses, emphasizing the need for a new approach to tackle these vulnerabilities effectively.Sean and Vivek further discuss the practical implementation of SquareX's solution. Vivek underscores that traditional security measures often overlook browser activities, presenting a blind spot for many organizations. SquareX aims to fill this gap by providing comprehensive visibility and real-time threat detection without relying on cloud connectivity.Vivek also answers questions about the automatic nature of the browser extension deployment, ensuring it does not disrupt day-to-day operations for users or IT teams. Additionally, he touches on the importance of organizational training and awareness, helping security teams interpret new types of alerts and attacks that occur within the browser environment.Towards the end of the episode, Vivek introduces a new attack toolkit designed for organizations to test their own secure web gateways and SASE SSE solutions, empowering them to identify vulnerabilities firsthand. He encourages security leaders to use this tool and visit a dedicated website for practical demonstrations.Listeners are invited to connect with Vivek and the SquareX team, especially those attending Black Hat and DEF CON, to learn more about this innovative approach to browser security.Learn more about SquareX: https://itspm.ag/sqrx-l91Note: This story contains promotional content. Learn more.Guest: Vivek Ramachandran, Founder, SquareX [@getsquarex]On LinkedIn | https://www.linkedin.com/in/vivekramachandran/ResourcesLearn more and catch more stories from SquareX: https://www.itspmagazine.com/directory/squarexView all of our Black Hat USA  2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegasAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story

Rustacean Station
Full-stack development of a B2B payment infrastructure in Rust, with Florent Bécart

Rustacean Station

Play Episode Listen Later Jul 29, 2024 27:13


Florent Bécart, CTO at Nikulipe, sits down with Luca Palmieri. Florent discusses Nikulipe's reasons for adopting Rust: lower operational costs, scalability, safety, security and maintainability. Nikulipe has also made a bet on Rust for its frontend development needs, using Yew and WebAssembly. The interview closes with an overview of the challenges they faced, including long compile times and workspace management. Contributing to Rustacean Station Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station Github: @rustacean-station Email: hello@rustacean-station.org Timestamps & referenced resources [@00:00] - Introduction Video recording of the interview [@00:33] - Start of the interview [@01:22] - Florent's presentation [@02:56] - Nikulipe's decision to adopt Rust [@05:10] - Managing spiky workloads with Rust [@06:41] - Using Rust for frontend development [@13:05] - Nikulipe's challenges working with Rust [@22:31] - The future of Rust at Nikulipe [@23:37] - Florent's advice on Rust for decision-makers [@26:30] - Conclusion Credits Intro Theme: Aerocity Audio Editing: Mainmatter Hosting Infrastructure: Jon Gjengset Show Notes: Mainmatter Hosts: Luca Palmieri

ITSPmagazine | Technology. Cybersecurity. Society
A Deep Dive into SquareX | A Short Brand Story from Black Hat USA 2024 | A SquareX Story with Chief Architect Jeswin Mathai | On Location Coverage with Sean Martin and Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jul 26, 2024 22:45


Welcome to another edition of Brand Stories, part of our On Location coverage of Black Hat Conference 2024 in Las Vegas. In this episode, Sean Martin and Marco Ciappelli chat with Jeswin Mathai, Chief Architect at SquareX, one of our esteemed sponsors for this year's coverage. Jeswin brings his in-depth knowledge and experience in cybersecurity to discuss the innovative solutions SquareX is bringing to the table and what to expect at this year's event.Getting Ready for Black Hat 2024The conversation kicks off with Marco and Sean sharing their excitement about the upcoming Black Hat USA 2024 in Las Vegas. They fondly recall their past experiences and the anticipation that comes with one of the most significant cybersecurity events of the year. Both hosts highlight the significance of the event for ITSP Magazine, marking ten years since its inception at Black Hat.Introducing Jeswin Mathai and SquareXJeswin Mathai introduces himself as the Chief Architect at SquareX. He oversees managing the backend infrastructure and ensuring the product's efficiency and security, particularly as a browser extension designed to be non-intrusive and highly effective. With six years of experience in the security industry, Jeswin has made significant contributions through his work published at various conferences and the development of open-source tools like AWS Goat and Azure Goat.The Birth of SquareXSean and Marco delve deeper into the origins of SquareX. Jeswin shares the story of how SquareX was founded by Vivek Ramachandran, who previously founded Pentester Academy, a cybersecurity education company. Seeing the persistent issues in consumer security and the inefficacy of existing antivirus solutions, Vivek decided to shift focus to consumer security, particularly the visibility gap in browser-level security.Addressing Security GapsJeswin explains how traditional security solutions, like endpoint security and secure web gateways, often lack visibility at the browser level. Attacks originating from browsers go unnoticed, creating significant vulnerabilities. SquareX aims to fill this gap by providing comprehensive browser security, detecting and mitigating threats in real time without hampering user productivity.Innovative Security SolutionsSquareX started as a consumer-based product and later expanded to enterprise solutions. The core principles are privacy, productivity, and scalability. Jeswin elaborates on how SquareX leverages advanced web technologies like WebAssembly to perform extensive computations directly on the browser, ensuring minimal dependency on cloud resources and optimizing user experience.A Scalable and Privacy-Safe SolutionMarco raises the question of data privacy regulations like GDPR in Europe and the California Consumer Privacy Act (CCPA). Jeswin reassures that SquareX is designed to be highly configurable, allowing administrators to adjust data privacy settings based on regional regulations. This flexibility ensures that user data remains secure and compliant with local laws.Real-World Use CasesTo illustrate SquareX's capabilities, Jeswin discusses common use cases like phishing attacks and how SquareX protects users. Attackers often exploit legitimate platforms like SharePoint and GitHub to bypass traditional security measures. 
With SquareX, administrators can enforce policies to block unauthorized credential entry, perform live analysis, and categorize content to prevent phishing scams and other threats.Looking Ahead to Black Hat and DEF CONThe discussion wraps up with a look at what attendees can expect from SquareX at Black Hat and DEF CON. SquareX will have a booth at both events, and Jeswin previews some of the talks on breaking secure web gateways and the dangers of malicious browser extensions. He encourages everyone to visit their booths and attend the talks to gain deeper insights into today's cybersecurity challenges and solutions.ConclusionIn conclusion, the conversation with Jeswin Mathai offers a comprehensive look at how SquareX is revolutionizing browser security. Their innovative solutions address critical gaps in traditional security measures, ensuring both consumer and enterprise users are protected against sophisticated threats. Join us at Black Hat Conference 2024 to learn more and engage with the experts at SquareX.Learn more about SquareX: https://itspm.ag/sqrx-l91Note: This story contains promotional content. Learn more.Guest: Jeswin Mathai, Chief Architect, SquareX [@getsquarex]On LinkedIn | https://www.linkedin.com/in/jeswinmathai/ResourcesLearn more and catch more stories from SquareX: https://www.itspmagazine.com/directory/squarexView all of our Black Hat USA  2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegasAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story

PodRocket - A web development podcast from LogRocket
The future of serverless is WASM with David Flanagan

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Jul 25, 2024 34:15


David Flanagan, founder of Rawdoke Academy, discusses why WebAssembly (WASM) could be the future of serverless technology and explores the evolution, benefits, and potential of WASM in transforming server-side applications across various environments. Links https://davidflanagan.com https://github.com/davidflanagan https://twitter.com/__DavidFlanagan https://www.linkedin.com/in/rawkode https://rawkode.academy https://youtube.com/@RawkodeAcademy https://www.hopp.bio/rawkode We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understand where your users are struggling by trying it for free at [LogRocket.com]. Try LogRocket for free today.(https://logrocket.com/signup/?pdr) Special Guest: David Flanagan.