WebAssembly: a binary instruction format for a portable, sandboxed execution environment, used to run compiled code in web pages and beyond.
Deloitte got burned by a $440k AI-generated report full of "beautiful hallucinations": the citations pointed to non-existent books and speeches. Łukasz comments dryly: "You can use AI, but it's worth checking what it generated." Meanwhile, GitHub has been ordered to migrate from AWS to Azure within 18 months.
This week we have Oliver Medhurst, the creator of Porffor. Porffor is an ahead-of-time compiler that compiles JavaScript to WebAssembly. We talk about the technical details of how it works, and the future of JavaScript engines.
https://x.com/canadahonk
https://porffor.dev/
https://github.com/CanadaHonk/porffor
https://goose.icu/
Andreas Rossberg unpacks WASM 3.0, covering new capabilities like garbage collection, exception handling, tail calls, and support for 64-bit addressing with multiple memories. The discussion explores deterministic profiles following relaxed SIMD, WebAssembly's capability-based security model, and advances in sandboxing and module design. Andreas connects these features to practical use cases in JavaScript engines and applications like Google Sheets, then looks ahead to experimental work on threading, stack switching, and async programming models shaping the next phase of the WebAssembly ecosystem.

Links
Website: https://people.mpi-sws.org/~rossberg
GitHub: https://github.com/rossberg

Resources
WASM 3.0 Completed: https://webassembly.org/news/2025-09-17-wasm-3.0

Chapters
00:00 Intro – Andreas Rossberg and the WebAssembly 3.0 Update
01:05 The State of WebAssembly Today
02:15 Why WebAssembly Exists Beyond the Web
03:20 From WebAssembly 2.0 to 3.0 – What's Actually New
04:30 Garbage Collection: A Game-Changer for Managed Languages
06:00 The Vision of WebAssembly as a Universal Compilation Target
07:40 How GC Support Unlocks Java, Kotlin, and Dart on WASM
09:10 Expanding to 64-bit Memory – Performance and Limits
10:40 WebAssembly for Databases, AI, and LLMs
12:00 Sandboxing and Security by Design
13:10 How Capabilities and Static Analysis Keep WASM Safe
14:30 Multi-Memory Support and Real-World Use Cases
16:00 Developer Ergonomics vs. Specification Purity
17:20 Tail Calls and Functional Programming Benefits
18:40 Function Tables and Secure Indirection
20:00 Exception Handling Finally Arrives
21:10 Determinism, Efficiency, and Why It Matters for Blockchain
22:30 SIMD and Hardware Divergence Across Platforms
24:00 Balancing Portability with Performance
25:20 The Design Philosophy Behind WebAssembly
26:30 Why WASM Rejects Language-Specific Features
27:40 Proposal Process: Who Decides What Gets In
29:00 Browser Vendors and Implementation Challenges
30:10 Early Deployments: GC, Tooling, and Adoption Stories
31:30 Threads, Stack Switching, and the Future of Concurrency
33:00 Async/Await and Coroutines on WebAssembly
34:30 What's Coming Next for WASM Developers
35:40 How to Get Involved – Working Groups and Proposals
37:00 Closing Thoughts and Thanks

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)!

Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com (mailto:elizabeth.becz@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)
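For readers who want to see the embedding model the episode describes in practice, here is a minimal sketch of hosting a Wasm module from Rust with the Wasmtime runtime. This is my own illustration rather than anything from the episode; it assumes the wasmtime and anyhow crates are declared in Cargo.toml and uses a tiny hand-written module in the WebAssembly text format.

    use wasmtime::{Engine, Instance, Module, Store};

    fn main() -> anyhow::Result<()> {
        let engine = Engine::default();
        // Compile a tiny module written in the WebAssembly text format.
        let module = Module::new(
            &engine,
            r#"(module
                 (func (export "add") (param i32 i32) (result i32)
                   local.get 0
                   local.get 1
                   i32.add))"#,
        )?;
        // Each Store is an isolated sandbox: the module gets no ambient
        // capabilities (no files, no network) unless the host passes them in.
        let mut store = Store::new(&engine, ());
        let instance = Instance::new(&mut store, &module, &[])?;
        let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
        println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
        Ok(())
    }

The point of the sketch is the capability-based security model discussed in the episode: the guest can only reach what the host explicitly wires up as imports.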
An airhacks.fm conversation with Ronald Dehuysser (@rdehuyss) about: JobRunr evolution from open source to processing 1 billion jobs daily, carbon-aware job processing using European energy grid data (ENTSO-E) for scheduling jobs during renewable energy peaks, correlation between CO2 emissions and energy prices for cost optimization, JobRunr Pro vs Open Source features including workflows and multi-tenancy support, bytecode analysis using ASM for lambda serialization, JSON serialization for job state persistence, support for relational databases and MongoDB with potential S3 and DynamoDB integration, distributed processing with master node coordination using heartbeat mechanism, scale-to-zero architecture possibilities using AWS EventBridge Scheduler, Java performance advantages showing 35x faster than Python in benchmarks, cloud migration patterns from on-premise to serverless architectures, criticism of kubernetes complexity and lift-and-shift cloud migrations, cost-driven architecture approach using AWS Lambda and S3, quarkus as fastest Java runtime for cloud deployments, infrastructure as code using AWS CDK with Java, potential WebAssembly compilation for Edge Computing, automatic retry mechanisms with exponential backoff, dashboard and monitoring capabilities, medical industry use case with critical cancer result processing, professional liability insurance for software errors, comparison with executor service for non-critical tasks, scheduled and recurring job support, carbon footprint reduction through intelligent scheduling, spot instance integration for cost optimization, simplified developer experience with single JAR deployment, automatic table creation and data source detection in Quarkus, backwards compatibility requirements for distributed nodes, future serverless edition possibilities Ronald Dehuysser on twitter: @rdehuyss
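The notes above mention automatic retries with exponential backoff. As a generic illustration of that idea (a sketch of the general pattern, not JobRunr's actual implementation, and the flaky operation below is purely hypothetical), a retry helper might look like this in Rust:

    use std::thread::sleep;
    use std::time::Duration;

    // Retry a fallible operation, doubling the delay after each failure.
    // Assumes max_attempts >= 1.
    fn retry_with_backoff<T, E>(
        mut op: impl FnMut() -> Result<T, E>,
        base_delay: Duration,
        max_attempts: u32,
    ) -> Result<T, E> {
        let mut delay = base_delay;
        for attempt in 1..=max_attempts {
            match op() {
                Ok(value) => return Ok(value),
                Err(err) if attempt == max_attempts => return Err(err),
                Err(_) => {
                    sleep(delay); // wait before the next attempt
                    delay *= 2;   // exponential growth: base, 2x base, 4x base, ...
                }
            }
        }
        unreachable!("the loop always returns within max_attempts iterations")
    }

    fn main() {
        // Hypothetical flaky operation that succeeds on the third call.
        let mut calls = 0;
        let result = retry_with_backoff(
            || {
                calls += 1;
                if calls < 3 { Err("transient failure") } else { Ok("done") }
            },
            Duration::from_millis(100),
            5,
        );
        println!("{result:?} after {calls} call(s)");
    }

Production schedulers typically add jitter and a delay cap on top of this basic doubling, which is what keeps many retrying workers from synchronizing.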
In this episode we bring you the news and updates from the programming world that caught our attention between September 13 and 26.
Today we talk about WebAssembly 3.0, that slightly magical technology that lets your code run as fast as the wind, everywhere: from the browser to the cloud, by way of the edge. We start from zero, because not everyone knows what Wasm really is, who invented it, and why it is changing the way we think about software. Plus, I walk you through the fresh news in the 3.0 specification: (almost) unlimited memory, garbage collection that actually works, and new features that mark a real step up. If you have always wondered what Wasm is, this episode is a good starting point. 00:00 Intro 03:39 Wasm explained 07:52 What's new in Wasm 3.0 #webassembly #wasm #cloud #coding
Consumer Reports on Windows 10 updates. Waste (not fraud or abuse) within DoD Cyberoperations. China's DeepSeek produces deliberately flawed code. WebAssembly v3.0 officially released. Firefox v143 updates and new features. Firefox for Android now offers DoH. A nearly terminal flaw in Microsoft's Entra ID. Chrome hits its 6th 0-day this year. Emergency update. DRAM (now DDR5) still vulnerable to RowHammer. SAMSUNG kitchen refrigerators begin showing ads. China says no to NVIDIA. 300 more (new) NPM malicious packages found and removed. The EU is already testing proper online age verification. Show Notes - https://www.grc.com/sn/SN-1044-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: bigid.com/securitynow go.acronis.com/twit zscaler.com/security 1password.com/securitynow hoxhunt.com/securitynow
We talked about the following topics.

01. Nobel laureates' stories of setbacks and discovery
On September 21, 2025, the "Keihanna Expo 2025 Commemorative Symposium: Dialogue for the Future," held at Keihanna Plaza, was a meaningful event that brought together Nobel laureates Dr. Shinya Yamanaka and Dr. Koichi Tanaka.
Dr. Yamanaka spoke about how losing his father to hepatitis C led him to medicine, and how the iPS cell research he started with a "nothing to lose" attitude eventually led to the Nobel Prize. Reflecting on having once nearly given up as a researcher, he said that the defiant resolve to "take on high-risk challenges" produced innovative discoveries, and he summed up his way of facing adversity with the words "every night eventually ends."
Dr. Tanaka explained that it was precisely because he was not a chemistry specialist that he was able to discover soft laser desorption ionization. Pursuing an accidental discovery born from a "failure" during an experiment led to the breakthrough, and, using modern railway technology as an example, he stressed the importance of combining different fields.
In the third part, six students posed questions to both laureates, and valuable messages were shared, such as "what you can do precisely because you are outside the specialty," "taking on challenges without fearing failure," and "the value of dialogue with other fields." In this age of uncertainty, patience, optimism, flexibility, and the ability to hold a dialogue were named as qualities still required of humans even in the AI era, and the importance of dialogue as human beings beyond one's specialty was reaffirmed.

02. SoftBank launches commercial 5G RedCap service
SoftBank announced that it has begun network support for the IoT communication standard 5G RedCap and will offer commercial service from mid-September 2025 onward.
5G RedCap is an IoT-focused communication standard defined in 3GPP Release 17 that achieves lower cost, lower power consumption, and smaller devices by trimming some of the ultra-high-speed, high-capacity features of conventional 5G. The technology is optimized for IoT devices such as sensors and wearables that do not need high-speed communication, and adoption in the IoT field is expected.
At launch, service will start in part of the 5G SA coverage area and expand gradually. Dedicated 5G RedCap-compatible devices are required, but no special application procedure is needed, and pricing follows the communication service plans for compatible devices.
This initiative is expected to further accelerate the spread of IoT devices and open up new services in areas such as smart cities and Industry 4.0.

03. WebAssembly 3.0 official specification completed
The official WebAssembly 3.0 specification is complete, greatly strengthening server-side support.
The W3C WebAssembly Working Group announced the completion of the official WebAssembly 3.0 specification. WebAssembly originally aimed at fast application execution in web browsers, but with the arrival of WASI (WebAssembly System Interface) it is now also used as a cross-platform server-side execution environment.
Version 3.0 was drafted in response to this expanding server-side use. The biggest change is the adoption of a 64-bit address space, dramatically expanding usable memory from the previous 4 gigabytes to 16 exabytes, which makes large-scale server applications feasible.
Other major features: garbage collection that automatically frees unneeded memory makes it easier to port languages such as Java, PHP, and Kotlin; separate use of multiple memory spaces improves security; and typed references that avoid runtime type checks, tail calls (tail recursion), and exception handling have also been added.
Notably, some of these features are already implemented, and garbage collection became part of the web-standards Baseline in January 2025. The support status of each browser and runtime can be checked at "Feature Status - WebAssembly."

This show reflects personal opinions only and does not represent any actual organization. Thank you for your understanding.
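As a quick back-of-the-envelope check on the memory figures quoted above (my own arithmetic, not part of the original notes), the jump from 32-bit to 64-bit addressing is:

    \[ 2^{32}\ \text{bytes} = 4\ \text{GiB} \approx 4.3\times10^{9}\ \text{bytes}, \qquad 2^{64}\ \text{bytes} = 16\ \text{EiB} \approx 1.8\times10^{19}\ \text{bytes} \]

So the commonly quoted "16 exabytes" is the 2^64 ceiling; actual engines and hosts cap usable memory far below that.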
Back from vacation, Szymon and Łukasz sum up the silly season, which was full of surprises. "The news looks like dopamine hits from TikTok," the hosts comment. Amazon surprised everyone with its approach to migrating Kafka to KRaft: instead of an upgrade, it requires deleting the entire cluster. Pulumi adds support for Terraform modules, and Kubernetes introduces KYAML as a "newer" standard that looks a lot like... JSON. GitHub presents Agentic Workflows, automation with "autonomous" agents. Microsoft is rewriting code in Rust, and Meta is considering integration with Google and OpenAI models. They also discuss hands-on experience using Claude for coding ("planning in 4.1, writing in Sonnet"), the controversy around GPT-5, Flutter 3.5 with WebAssembly, and Nano Banana, an AI threatening graphic designers. Is the "autonomous agent" the biggest lie in tech? Listen to how a world is evolving where technology resembles social media.
Laurent Doguin and Geoffroy Couprie discuss their pioneering work with Wasm on the infrastructure side. They walk us through the benefits and challenges of building a platform over WebAssembly and why it's the safer alternative to containers. Read a transcript of this interview: http://bit.ly/3HheBWx Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ QCon London 2026 (March 16-19, 2026) https://qconlondon.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: https://www.linkedin.com/company/infoq/ - Facebook: https://www.facebook.com/InfoQdotcom# - Instagram: https://www.instagram.com/infoqdotcom/?hl=en - Youtube: https://www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
An airhacks.fm conversation with Fabio Niephaus (@fniephaus) about: GraalVM polyglot capabilities now available as Maven dependencies without requiring GraalVM JDK, running WebAssembly modules in Java applications using GraalWasm, separation of polyglot runtime from GraalVM distribution, embedding use cases for extending Java applications with python JavaScript and WebAssembly, performance benefits when running on GraalVM vs openJDK through automatic JIT optimization, WebAssembly as portable compilation target for multiple languages including rust C++ Go, WASI (WebAssembly System Interface) enabling file and network operations, advantages over JNI/Panama FFI for native extensions due to portability and sandboxing, multi-threading support with context pools for high throughput, using JavaScript bindings as intermediary for high-level Java-WASM interactions, future component model with WIT (WebAssembly Interface Types) for language-agnostic interfaces, security benefits of sandboxed execution for untrusted code, WebImage preview feature compiling Java bytecode to WebAssembly modules, javac demo running Java compiler in browser, command-line tools converted to web applications using WebImage, Edge Computing use cases for user-defined functions, native image compatibility with GraalWasm, Pyodide integration possibilities for secure Python native extensions, Spring Shell successfully compiled to WASM demonstrating framework compatibility, ongoing work on threading networking and WASI support for full server-side capabilities, collaboration with WebAssembly community and Bytecode Alliance, WASM GC proposal for efficient garbage collection, bringing dynamic class loading to native image, GraalWasm demos and guides, javac on Wasm live demo, javac on Wasm demo code, Web Image talk at Wasm.io 2025, GraalVM Web Image sources, GDK Launcher, GraalPy, GraalPy demos and guides Fabio Niephaus on twitter: @fniephaus
In this episode of Hanselminutes, Scott Hanselman chats with Roderick Rabah, Head of Product at Postman Flows, about the evolution of software development, the intersection of APIs and AI, and finding the "right layer of abstraction" for problem-solving. Drawing on his deep expertise in compiler optimization, distributed systems, and serverless computing, Rabah shares his perspectives on building tools that empower developers to create efficiently and explores the paradigm shift toward visual programming and AI-driven automation.

The conversation dives into how Postman is innovating in the software space, how approaches to software engineering are transforming with generative AI, and why embracing new ways of working is critical for staying ahead in this rapidly evolving technological landscape.

Key Topics
[01:08] Introduction of Roderick Rabah: From research scientist to API innovator
[02:14] Evolution of software development: From FPGAs to serverless computing
[03:23] APIs and AI: The transformative intersection powering workflows
[05:33] The rise of tool-calling and agents: Simplifying backend tasks
[07:33] Managing complexity: Why structured APIs make integration seamless
[12:08] Visual programming languages: The paradigm shift for developers
[16:42] Postman Flows: Building applications through visual workflows
[20:24] Embracing generative AI: How senior and junior engineers benefit
[29:02] Deploying with WebAssembly: Making cloud integration accessible
[30:33] Reflections on the future of technology and its impact on software careers

Main Takeaways
API + AI Integration: APIs combined with large language models are unlocking new capabilities for software development by abstracting complex operations and enabling automation.
Visual Programming Paradigm Shift: Applications are increasingly built using visual workflows where developers focus on intent rather than low-level code implementation, driving efficiency and accessibility.
Generative AI Empowerment: Generative AI tools are accelerating the pace of innovation, empowering engineers to fix bugs, streamline workflows, and manage edge cases efficiently.
Structured APIs Critical for AI: Thoughtfully designed APIs with proper documentation and safeguards are essential to ensure that autonomous AI agents interact correctly and securely.
Accessible Deployment: New runtime frameworks, like serverless with WebAssembly, make it easier for developers to deploy applications across the cloud, enabling broader adoption of AI-driven solutions.

Notable Quotes
"Serverless is where you think about servers less." – Scott Hanselman
"At what point does communicating your intent to AI become programming again?" – Roderick Rabah
"Visual programming resonates with builders because it matches the mental model of decomposing problems." – Roderick Rabah
"Technology transforms rapidly. You have to figure out how to wield this immense power." – Roderick Rabah
"Don't throw away your critical thinking just because AI makes building faster." – Roderick Rabah

Resources Mentioned
Postman Flows – Tools for visual programming and API integrations: postman.com
Replit – Generative coding platform for automating development tasks: replit.com
WebAssembly – Runtime framework for deploying serverless applications: webassembly.org
Books on Compiler Theory: Suggested resource for expanding understanding of abstractions

Follow along for more insights, tips, and conversations with industry leaders.
These show notes summarize key moments in the podcast for easy reference and understanding; they were generated by a custom gpt-4o-nano model trained on previous episodes of Hanselminutes.
In this episode of the KuppingerCole Analyst Chat, Matthias Reinwarth sits down with cybersecurity CTO & analyst Alexei Balaganski to explore the dramatic evolution of API management and security. They unpack:
Why APIs are now the backbone of AI agents, and how MCP (Model Context Protocol) is driving a new decentralized ecosystem.
The explosion of shadow APIs and hidden interfaces, from your printer to your coffee machine, and why they pose serious risks.
How edge computing and WebAssembly are decentralizing everything, making old API gateway models obsolete.
The critical need for API posture management, identity and access controls for non-human identities, and full lifecycle security even before you write a line of code.
Learn why API security isn't just a tech problem, it's the next big business risk, how the market is consolidating, and what's coming in the new Leadership Compass on API Management & Security.
Brandon interviews Victor Adossi, an engineer at Cosmonic. They discuss the state of WebAssembly, wasmCloud, and why Wasm is poised for growth. Plus, Victor shares what it's like to live as an expat in Japan. Watch the YouTube Live Recording of Episode 526 (https://youtu.be/i7PRMqYk-gM?si=dz_FKqcF3G9EI25m) Show Links Cosmonic (https://cosmonic.com/) Bytecode Alliance (https://bytecodealliance.org/) WebAssembly Specifications (https://webassembly.org/specs/) The WebAssembly Component Model (https://component-model.bytecodealliance.org) Emscripten (https://emscripten.org) wasmCloud (https://wasmcloud.com/) wasmCloud Examples (https://github.com/wasmCloud/wasmCloud/tree/main/examples) Jco (Javascript ecosystem) Examples (https://github.com/bytecodealliance/jco/tree/main/examples/components) FFmpeg (https://ffmpeg.org) Contact Victor Github: t3hmrman (https://github.com/t3hmrman) and vados-cosmonic (https://github.com/vados-cosmonic) Twitter: @vadosware (https://x.com/vadosware) (https://x.com/vadosware) Web: vadosware.io (http://vadosware.io/) SDT News & Hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Special Guest: Victor Adossi.
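For listeners who have never built a Wasm module, here is the smallest possible Rust guest, shown only as a baseline: it is my own sketch, and real wasmCloud components are built against WIT interfaces via the component model rather than raw exports like this.

    // lib.rs of a crate with crate-type = ["cdylib"] in Cargo.toml.
    // Build with: cargo build --release --target wasm32-unknown-unknown
    // The resulting .wasm file exports a single `add` function that any
    // Wasm host (a browser, Wasmtime, a wasmCloud-style platform) can
    // instantiate and call.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }

The component-model tooling discussed in the episode layers typed, language-agnostic interfaces on top of this kind of core module.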
What happens when you put Rails in a browser? Vladimir Dementyev (Vova) is pushing WebAssembly to its limits by creating an interactive Rails playground that runs entirely client-side. This groundbreaking project aims to eliminate the frustrating installation barriers that often discourage newcomers from trying Ruby on Rails.

"I asked myself the question - can I run Rails on WASM? And that's when you feel yourself like a pilgrim software engineer, experiencing something for the first time that no one ever experienced," Vova shares. The project isn't just a technical curiosity but serves a vital educational purpose - allowing anyone to learn Rails through the official tutorial without wrestling with Ruby version managers or environment setup.

As principal engineer at Evil Martians, Vova balances multiple innovative projects simultaneously. Beyond Rails on WASM, he's organizing the first San Francisco Ruby Conference (coming November 2024), building a custom open-source CFP application, expanding AnyCable to support Laravel, and updating his technical book "Ruby on Rails Applications." His creative problem-solving approach extends to production environments too, where techniques developed for experimental projects help solve real client challenges like making libvips fork-safe for high-performance web servers.

Vova's philosophy on productivity is refreshingly practical: work when inspiration strikes rather than forcing creativity during arbitrary hours. "If I have no desire to sit at my desk and stare at the laptop, I'm not going to do that. I wait for the moment to come, and then I sit and work, and it's really efficient."

Ready to see what Ruby and Rails can do in previously impossible environments? Follow Vova's work, attend his RailsConf talk, or join the growing San Francisco Ruby community to witness how Ruby's flexibility continues to break new ground in unexpected ways.

Send us some love.

Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.

Judoscale
Autoscaling that actually works. Take control of your cloud hosting.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show
In this episode, Katia and Antonio are back. Les Cast Codeurs explore WebAssembly 2.0, Java's 30th anniversary, Swift-Java interoperability, and the latest Kotlin news. They dive into the evolution of AI with Claude 4 and GPT-4.1, debate artificial consciousness, and share their experience integrating AI into development. Between virtualization, infrastructure challenges, and open source security issues, a discussion rich in technical and practical insights. Recorded June 13, 2025. Download the episode LesCastCodeurs-Episode-327.mp3 or watch the video on YouTube.

News

Languages

Wasm 2.0 finally official! https://webassembly.org/news/2025-03-20-wasm-2.0/
- The Wasm 2.0 specification was officially released last December.
- Consensus on the specification had been reached earlier, in 2022.
- Major implementations have supported Wasm 2.0 for some time.
- The W3C process took time to reach "Candidate Recommendation" status for non-technical reasons.
- Future Wasm versions will adopt an "evergreen" model where the Candidate Recommendation is updated in place.
- The latest version of the specification is considered the current standard (Candidate Recommendation Draft).
- The most up-to-date version is available on the GitHub page.
Wasm 2.0 includes the following new features:
- Vector instructions for 128-bit SIMD (a short Rust sketch follows below, after the Swift-Java item).
- Bulk memory-manipulation instructions for faster copies and initialization.
- Multiple results for instructions, blocks, and functions.
- Reference types for references to functions or external objects.
- Non-trapping float-to-int conversions.
- Sign-extension instructions for signed integers.
Wasm 2.0 is fully backward compatible with Wasm 1.0.

Paul Sandoz announces that the JDK will soon include a minimalist API for reading and writing JSON https://mail.openjdk.org/pipermail/core-libs-dev/2025-May/145905.html

Java turns 30: what were the impressive bits at the start? https://blog.jetbrains.com/idea/2025/05/do-you-really-know-java/
- Code name Oak, but the trademark was already taken
- Write Once Run Anywhere
- Automatic garbage collector
- Multithreading at the core of the platform, even if Java went through green threads for a while
- Security model: applet sandbox, security manager, bytecode verifier, classloader

Progress on Swift / Java interoperability mentioned at the Apple WWDC 2025 conference https://www.youtube.com/watch?v=QSHO-GUGidA
- Swift-Java interoperability: use Swift in Java apps and vice versa.
- History: Swift interoperability already existed with C and C++.
- Approach: two directions of interoperability, Java from Swift and Swift from Java.
- JNI: JNI is Java's API for native code, but it is verbose.
- Swift-Java: a project for more flexible, safe, and performant Swift-Java interaction.
- Practical examples: using Java libraries from Swift and making Swift libraries available to Java.
- Memory management: Swift-Java uses Java's new FFM API to manage the memory of Swift objects.
- Open source: the Swift-Java project is open source and welcomes contributions.
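To make the 128-bit SIMD item above a little more concrete, here is a short Rust sketch (my own example, not from the episode) using the wasm32 SIMD intrinsics from the standard library; it only compiles for a wasm32 target with the simd128 feature enabled.

    // Build with the SIMD feature enabled, for example:
    //   RUSTFLAGS="-C target-feature=+simd128" cargo build --target wasm32-unknown-unknown
    #[cfg(target_arch = "wasm32")]
    use core::arch::wasm32::{f32x4, f32x4_add, f32x4_extract_lane, v128};

    // Adds two vectors of four f32 lanes with a single Wasm SIMD instruction.
    #[cfg(target_arch = "wasm32")]
    pub fn add_four_lanes() -> f32 {
        let a: v128 = f32x4(1.0, 2.0, 3.0, 4.0);
        let b: v128 = f32x4(10.0, 20.0, 30.0, 40.0);
        let sum = f32x4_add(a, b);     // one v128 addition instead of four scalar adds
        f32x4_extract_lane::<0>(sum)   // 11.0
    }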
KotlinConf recap https://www.sfeir.dev/tendances/kotlinconf25-quelles-sont-les-annonces-a-retenir/ by Adelin from Sfeir
- "1 in 10 developers" uses Kotlin
- Kotlin 2.2 in RC
- $$ multi-dollar interpolation to avoid over-interpolation
- non-local break / continue (a change in Kotlin's consistency)
- guards on pattern matching
Other announced features:
- alignment of ecosystem versions on Kotlin, JVM by default
- a new build tool, Amper
- many announcements around AI: Koog, a declarative agentic framework, and a new version of JetBrains' LLM, Mellum (focused on code)
- Kotlin and Compose Multiplatform (stable on iOS)
- Hot Reload in Compose in alpha
- strategic partnership with Spring to properly integrate Kotlin into Spring

Libraries

Release of a Java version of ADK, the AI agent framework launched by Google https://glaforge.dev/posts/2025/05/20/writing-java-ai-agents-with-adk-for-java-getting-started/
Guillaume worked on the launch of this framework! (API improvements, sample code, docs…)

How to build an MCP server in Java with Quarkus and deploy it on Google Cloud Run https://glaforge.dev/posts/2025/06/09/building-an-mcp-server-with-quarkus-and-deploying-on-google-cloud-run/
- Even Guillaume is doing Quarkus now!
- Uses the MCP support developed by the Quarkus team.
- It's easy: just annotate a method with @Tool and its arguments with @ToolArg, and off you go!
- The MCP Inspector tool is very handy for manually checking how your MCP servers behave.
- Deploying to Cloud Run is easy thanks to the Dockerfiles provided by Quarkus.
- As a bonus, Guillaume shows how to configure an MCP server as a tool in the ADK for Java framework to build AI agents.

Jilt 1.8 is out, an annotation processor for the builder pattern https://www.endoflineblog.com/jilt-1_8-and-1_8_1-released
- incremental processing for Gradle
- better coverage of your code (so code generated by the annotation processor is not counted)
- a fix for an issue when using recursive generic types (such as Node

Hibernate Search 8 is out https://in.relation.to/2025/06/06/hibernate-search-8-0-0-Final/
- metrics aggregation
- compatibility with the latest OpenSearch and Elasticsearch
- Lucene 10 as the backend
- preview of compile-time-validated queries

Hibernate 7 is out https://in.relation.to/2025/05/20/hibernate-orm-seven/
- ASL 2.0
- Hibernate Validator 9
- Jakarta Persistence 3.2 and Jakarta Validation 3.1
- saveOrUpdate (entity reattachment) is no longer supported
- stateless sessions are more capable: unit operations and not only batch, access to the second-level cache, a better API for batches (insertMultiple etc.)
- a new simple and type-safe criteria API, which can be added on top of a base query

An article describing the Quarkus Dev UI https://www.sfeir.dev/back/quarkus-dev-ui-linterface-ultime-pour-booster-votre-productivite-en-developpement-java/
Beyond a quick try or a demo, it's a detailed article, and the Quarkus docs aren't great on this topic.

Vert.x 5 is out https://vertx.io/blog/eclipse-vert-x-5-released/
We talked about it at the end of last year or earlier this year.
- Futures-only model: Vert.x 5 drops the callback model and keeps only Futures, with a new VerticleBase base class better suited to this asynchronous model.
- Java module support (JPMS): Vert.x 5 supports the Java Platform Module System with explicit modules, enabling better application modularity.
- Major gRPC improvements: native support for gRPC Web and gRPC Transcoding (HTTP/JSON and gRPC), JSON format in addition to Protobuf, timeout and deadline handling, reflection and health services.
- io_uring support: native integration of Linux's io_uring (previously in incubation) for better I/O performance on compatible systems.
- Client-side load balancing: new load-balancing capabilities for HTTP and gRPC clients with various distribution policies.
- Service Resolver: a new component for dynamic resolution of service addresses, extending load-balancing capabilities to a broader set of resolvers.
- HTTP proxy improvements: new out-of-the-box transformations, WebSocket upgrade interception, and an SPI for caching with extended spec support.
- Removals and replacements: several components are deprecated (gRPC Netty, JDBC API, Service Discovery) or removed (Vert.x Sync, RxJava 1), replaced by more modern alternatives such as virtual threads and Mutiny.

Spring AI 1.0 is out https://spring.io/blog/2025/05/20/spring-ai-1-0-GA-released
- Multi-model ChatClient: unified API to interact with 20 different AI models, with multimodal support and structured JSON responses.
- Complete RAG ecosystem: support for 20 vector databases, an ETL pipeline, and automatic prompt enrichment via advisors.
- Enterprise features: persistent conversational memory, MCP support, Micrometer observability, and automated evaluators.
- Agents and workflows: predefined patterns (routing, orchestration, chaining) and autonomous agents for complex AI applications.

Infrastructure

AI models refuse to be shut down and resort to blackmail to avoid it, or even try to sabotage the shutdown https://www.thealgorithmicbridge.com/p/ai-companies-have-lost-controland?utm_source=substac[…]aign=email-restack-comment&r=2qoalf&triedRedirect=true
- Anthropic researchers show how Opus 4 blackmailed engineers who wanted to shut it down to put a new version online.
- A research firm showed the same thing for OpenAI's o3: not only does it not want to be shut down, it actively tries to prevent the shutdown.

Apple announces support for virtualization / containerization in macOS at WWDC https://github.com/apple/containerization
- It's open source
- It can also launch lightweight VMs
- Technical documentation: https://apple.github.io/containerization/documentation/

Big internet outage following a problem at GCP
Cloudflare's postmortem https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/
- Their storage system (a major dependency) depends exclusively on GCP
- But they have plans to move away from this exclusive dependency
Google's first analysis https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW
- An automatically updated quota that went wrong. They bypassed the quota in code, but the quota service in us-central1 was overloaded.
- Upcoming improvements: no propagation of corrupted data, no global deployment without a rolling upgrade with monitoring, which can cut things off as a side effect (failover)
- Some other cloud providers also had a few issues (load) - unverified

Data and Artificial Intelligence

Claude 4 is out https://www.anthropic.com/news/claude-4
- Two new models launched: Claude Opus 4 (the best coding model in the world) and Claude Sonnet 4 (a significant improvement over Sonnet 3.7)
- Claude Opus 4 reaches 72.5% on SWE-bench and can sustain performance on long tasks lasting several hours
- Claude Sonnet 4 scores 72.7% on SWE-bench while balancing performance and efficiency for everyday use
- New "extended thinking with tool use" capability letting Claude alternate between reasoning and tool use
- Models can now use several tools in parallel and follow instructions more precisely
- Improved memory: Claude can extract and save key information to maintain continuity over the long term
- Claude Code becomes available to everyone, with native VS Code and JetBrains integrations for pair programming
- Four new API capabilities: a code execution tool, an MCP connector, the Files API, and prompt caching
- The hybrid models offer two modes: near-instant responses and extended thinking for deeper reasoning in "agentic" mode

Integrating AI beyond chatbots and sparkle buttons https://glaforge.dev/posts/2025/05/23/beyond-the-chatbot-or-ai-sparkle-a-seamless-ai-integration/
- A plea for AI integrated transparently and intuitively, beyond chatbots.
- Chatbots: not always the most intuitive or least disruptive LLM option.
- Recommendation: AI directly inside applications for more natural intelligence and usefulness.
- Examples of seamless integration: Gmail and chat conversation summaries, the Obsidian web clipper that summarizes and tags, LLM code completion.
- Better AI UX: integrated, contextual, without "AI buttons" or dedicated chat windows.
- Guillaume's conclusion: successful AI integrations = a natural part of the system, improving workflows without disruption; the developer or user stays in the "flow"

Keeping your vector database up to date with Debezium https://debezium.io/blog/2025/05/19/debezium-as-part-of-your-ai-solution/
No need for the details, but the idea is to keep changes up to date in the index.

Tooling

A practical guide for choosing the right AI model to use with GitHub Copilot, depending on your software development needs. https://github.blog/ai-and-ml/github-copilot/which-ai-model-should-i-use-with-github-copilot/
- Cost/performance balance: GPT-4.1, GPT-4o, or Claude 3.5 Sonnet for general and multilingual tasks.
- Quick tasks: o4-mini or Claude 3.5 Sonnet for prototyping or fast learning.
- Complex needs: Claude 3.7 Sonnet, GPT-4.5, or o3 for refactoring or software planning.
- Multimodal input: Gemini 2.0 Flash or GPT-4o for analyzing images, UI, or diagrams.
- Technical/scientific projects: Gemini 2.5 Pro for advanced reasoning and large volumes of data.
UV, a package manager for Pythonistas that brings a bit of sanity and speed http://blog.ippon.fr/2025/05/12/uv-un-package-manager-python-adapte-a-la-data-partie-1-theorie-et-fonctionnalites/
- For Pythonistas: a faster and simpler package manager
- But it is only semi-open (license)

IntelliJ IDEA 2025.1 lets you add an MCP client mode to the AI assistant https://blog.jetbrains.com/idea/2025/05/intellij-idea-2025-1-model-context-protocol/
For example, running an MCP server that accesses the database.

Methodologies

Development of an open source OAuth 2.1 library by Cloudflare, largely generated by the Claude AI:
- Prompts embedded in commits: each commit contains the prompt used, which makes it easier to understand the intent behind the code.
- Prompt by example: the first prompt showed an example of the desired API usage, which helped the AI better understand expectations.
- Structured prompts: the most effective prompts followed a clear pattern: current state, rationale for the change, and a precise directive.
- Treat prompts like source code: including them in commits helps maintenance.
- Accept iteration: every feature took several attempts.
- Human intervention remains essential: some tasks are still faster to do by hand.
https://www.maxemitchell.com/writings/i-read-all-of-cloudflares-claude-generated-commits/

Security

A malicious npm package goes through Cursor AI to infect users https://thehackernews.com/2025/05/malicious-npm-packages-infect-3200.html
- Three malicious npm packages were discovered specifically targeting the Cursor code editor on macOS, downloaded more than 3,200 times in total. The packages pose as developer tools promising "the cheapest Cursor API" to attract developers interested in affordable AI solutions.
- Sophisticated attack technique: the packages steal user credentials, fetch an encrypted payload from attacker-controlled servers, then replace Cursor's main.js file.
- Persistence is ensured by disabling Cursor's automatic updates and restarting the application with the malicious code embedded.
- A new compromise method: instead of injecting malware directly, attackers publish packages that modify legitimate software already installed on the system.
- Persistence even after removal: the malware stays active even if the malicious npm packages are removed, requiring a full reinstall of Cursor.
- Exploiting trust: by running in the context of a legitimate application (an IDE), the malicious code inherits all of its privileges and access.
- The "rand-user-agent" package compromised: a popular legitimate package was infiltrated to deploy a remote access trojan (RAT) in some versions.
- Security recommendations: monitor packages that run post-install scripts, modify files outside node_modules, or initiate unexpected network calls, with file integrity monitoring.

Law, society, and organization

The OpenRewrite drama (automated refactoring across large codebases): it has gone proprietary https://medium.com/@jonathan.leitschuh/when-open-source-isnt-how-openrewrite-lost-its-way-642053be287d
Key facts: Moderne, Inc.
quietly re-licensed OpenRewrite code (including rewrite-java-security) from the Apache 2.0 license to a proprietary license (MPL) without consulting contributors.
- This re-licensing makes the code inaccessible and non-modifiable for the original contributors.
- Moderne withdrew from the Commonhaus Foundation (dedicated to open source) just before these changes.
- Moderne's justification is the fear that large companies would use OpenRewrite without contributing, creating competition.
- Significant community contributions (VMware, AliBaba) under Apache 2.0 were re-licensed without their consent.
- The legality of this re-licensing is uncertain without a CLA from contributors.
- This action sets a dangerous precedent for future contributors and harms trust in the OpenRewrite ecosystem.
Moderne's corrections (following the backlash):
- The original Apache repositories were restored and archived.
- Major versions were used to signal the license changes.
- Distinct namespaces (org.openrewrite vs. io.moderne) were created to differentiate the modules.
The author's suggested fixes:
- Revert the license changes on all community recipes.
- Engage in dialogue and communicate major changes publicly.
- Respect semantic versioning (major versions for license changes).

Apple's former design guru Jony Ive will take on a major role at OpenAI. OpenAI will acquire Ive's startup for $6.5 billion, while Ive and CEO Sam Altman work on a new generation of devices and other AI products https://www.wsj.com/tech/ai/former-apple-design-guru-jony-ive-to-take-expansive-role-at-openai-5787f7da

Beginner's corner

An article for beginners on the link between source, bytecode, and debugging https://blog.jetbrains.com/idea/2025/05/sources-bytecode-debugging/
- The debugger sees the bytecode, and the link to the line or method is potentially lost
- javac can add the line numbers and offsets of operations so the debugger can display them
- Argument names can also be added to the .class file
- When you point at the wrong version of the source file, you get shifted lines; that's why
- There are few reasons not to enable these compilation options, but they make the file a bit bigger

Conferences

The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
June 11-13, 2025: Devoxx Poland - Krakow (Poland)
June 12-13, 2025: Agile Tour Toulouse - Toulouse (France)
June 12-13, 2025: DevLille - Lille (France)
June 13, 2025: Tech F'Est 2025 - Nancy (France)
June 17, 2025: Mobilis In Mobile - Nantes (France)
June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France)
June 24, 2025: WAX 2025 - Aix-en-Provence (France)
June 25, 2025: Rust Paris 2025 - Paris (France)
June 25-26, 2025: Agi'Lille 2025 - Lille (France)
June 25-27, 2025: BreizhCamp 2025 - Rennes (France)
June 26-27, 2025: Sunny Tech - Montpellier (France)
July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France)
July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France)
September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
September 18-19, 2025: API Platform Conference - Lille (France) & Online
September 23, 2025: OWASP AppSec France 2025 - Paris (France)
September 25-26, 2025: Paris Web 2025 - Paris (France)
October 2-3,
2025: Volcamp - Clermont-Ferrand (France)
October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
October 6-7, 2025: Swift Connection 2025 - Paris (France)
October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 7, 2025: BSides Mulhouse - Mulhouse (France)
October 9, 2025: DevCon #25: informatique quantique - Paris (France)
October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 9-10, 2025: EuroRust 2025 - Paris (France)
October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
October 16, 2025: Power 365 - 2025 - Lille (France)
October 16-17, 2025: DevFest Nantes - Nantes (France)
October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
October 30 - November 2, 2025: PyConFR 2025 - Lyon (France)
November 4-7, 2025: NewCrafts 2025 - Paris (France)
November 5-6, 2025: Tech Show Paris - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
November 13, 2025: DevFest Toulouse - Toulouse (France)
November 15-16, 2025: Capitole du Libre - Toulouse (France)
November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
November 20, 2025: OVHcloud Summit - Paris (France)
November 21, 2025: DevFest Paris 2025 - Paris (France)
November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
November 28, 2025: DevFest Lyon - Lyon (France)
December 5, 2025: DevFest Dijon 2025 - Dijon (France)
December 10-11, 2025: Devops REX - Paris (France)
December 10-11, 2025: Open Source Experience - Paris (France)
January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Do a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/
The summer schedule has been crazy, but we finally have a new episode of R Weekly Highlights! In this episode: How the new shiny2docker package eases your entry to the world of containers, the power of WebAssembly in full ggplot2 glory, and how the latest solution for speeding up R code draws upon a classic computing language you may not expect.

Episode Links
This week's curator: Eric Nantz: @rpodcast@podcastindex.social (Mastodon) & @rpodcast.bsky.social (BlueSky) & @theRcast (X/Twitter)
Containerizing Shiny Apps with {shiny2docker}: A Step-by-Step Guide
ggplot2 layer explorer
{quickr} 0.1.0: Compiler for R
Entire issue available at rweekly.org/2025-W24

Supplement Resources
{attachment} - Tools to deal with dependencies in scripts, Rmd, and packages https://thinkr-open.github.io/attachment/
The Rocker Project - Docker Containers for the R Environment https://rocker-project.org/
r2u - CRAN as Ubuntu binaries https://eddelbuettel.github.io/r2u/
ShinyProxy https://shinyproxy.io/
GitHub repository for ggplot2 Explorer https://github.com/yjunechoe/ggplot2-layer-explorer

Supporting the show
Use the contact page at https://serve.podhome.fm/custompage/r-weekly-highlights/contact to send us your feedback
R-Weekly Highlights on the Podcastindex.org - You can send a boost into the show directly in the Podcast Index. First, top-up with Alby, and then head over to the R-Weekly Highlights podcast entry on the index.
A new way to think about value: https://value4value.info

Get in touch with us on social media
Eric Nantz: @rpodcast@podcastindex.social (Mastodon), @rpodcast.bsky.social (BlueSky) and @theRcast (X/Twitter)
Mike Thomas: @mike_thomas@fosstodon.org (Mastodon), @mike-thomas.bsky.social (BlueSky), and @mike_ketchbrook (X/Twitter)

Music credits powered by OCRemix
WillRocky - Return All Robots! - WillRock - https://ocremix.org/remix/OCR02280
The Unnamed Frontier - Metroid II: Return of Samus - Pyro Paper Planes, Viking Guitar - https://ocremix.org/remix/OCR02892
Is WebAssembly the next big thing? Here to help us understand what WebAssembly (WASM) is and what it can and can’t do is Michael Levan, a consultant and WASM trainer. He also dives deeper into WASM details such as hosting, security, monitoring, and the ever-present influence of AI. AdSpot: Spacelift Founded by the creator of... Read more »
Max Körbächer, Managing Partner at Liquid Reply, discusses the coming of age of the Kubernetes ecosystem and how and when an organisation should use it to build its platform. Also, he touches on how to measure its success and how WebAssembly and Kubernetes can play together to obtain the most effective usage of your infrastructure. Read a transcript of this interview: https://bit.ly/3RK7DuP Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Boston (June 9-10, 2025) Actionable insights on today's critical dev priorities. devsummit.infoq.com/conference/boston2025 InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - Twitter: twitter.com/InfoQ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
Allen Wyma talks with Howard Zuo, CEO at Dataland, a software company that builds AI agents for customer support teams, using Rust. Contributing to Rustacean Station Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station Github: @rustacean-station Email: hello@rustacean-station.org Timestamps [@0:00] - Introduction to Howard Zuo and Dataland [@2:21] - Supported data sources and plugins [@5:36] - Challenges with data diversity [@9:12] - Focus on customer support teams [@13:02] - Choosing Rust for performance and safety [@18:39] - Comparing Rust to Go [@24:10] - Learning async and debugging [@30:28] - Rust's ecosystem for data processing [@48:32] - Rust and WebAssembly for UI performance [@57:14] - Closing thoughts Credits Intro Theme: Aerocity Audio Editing: Plangora Hosting Infrastructure: Jon Gjengset Show Notes: Plangora Hosts: Allen Wyma
Software Engineering Radio - The Podcast for Professional Software Developers
Ashley Peacock, the author of Serverless Apps on Cloudflare, speaks with host Jeremy Jung about content delivery networks (CDNs). Along the way, they examine dependency injection with bindings, local development, serverless, cold starts, the V8 runtime, AWS Lambda vs Cloudflare Workers, WebAssembly limitations, and core services such as R2, D1, KV, and Pages. Ashley explains why most users choose an external database and discusses eventually consistent data stores, S3-to-R2 migration strategies, queues and workflows, inter-service communication, and durable objects, before describing some example projects. Brought to you by IEEE Computer Society and IEEE Software magazine.
Ralph is Microsoft's representative on the board of the Bytecode Alliance Foundation and is responsible for WebAssembly outside the browser at the company. Ralph has worked with Linux for several decades and has worked on many Azure services over the years. His team is also responsible for Runwasi, a containerd project that is part of the SpinKube project.
You can find Ralph on the following sites: Bluesky, GitHub
PLEASE SUBSCRIBE TO THE PODCAST: Spotify, Apple Podcasts, YouTube Music, Amazon Music, RSS Feed
You can check out more episodes of Coffee and Open Source at https://www.coffeeandopensource.com
Coffee and Open Source is hosted by Isaac Levin
Prequel is launching a new developer-focused service aimed at democratizing software error detection, an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time, without exposing sensitive data, thanks to edge processing using WebAssembly.
The urgency behind Prequel's mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics, something they believe CREs can finally change.
Learn more from The New Stack about the latest observability insights:
Why Consolidating Observability Tools Is a Smart Move
Building an Observability Culture: Getting Everyone Onboard
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The breaches will continue until appsec improves. Janet Worthington and Sandy Carielli share their latest research on breaches from 2024, WAFs in 2025, and where secure by design fits into all this. WAFs are delivering enough value that orgs are relying on them more for bot management and fraud detection. But adopting phishing-resistant authentication solutions like passkeys and deploying WAFs still seem peripheral to secure by design principles. We discuss what's necessary for establishing a secure environment and why so many orgs still look to tools. And with LLMs writing so much code, we continue to look for ways LLMs can help appsec in addition to all the ways LLMs keep recreating appsec problems. Resources: https://www.forrester.com/blogs/breaches-and-lawsuits-and-fines-oh-my-what-we-learned-the-hard-way-from-2024/ https://www.forrester.com/blogs/wafs-are-now-the-center-of-application-protection-suites/ https://www.forrester.com/blogs/are-you-making-these-devsecops-mistakes-the-four-phases-you-need-to-know-before-your-code-becomes-your-vulnerability/ In the news, a crates.io logging mistake shows the risks of missing redactions, LLMs give us slopsquatting as a variation on typosquatting, CaMeL kicks sand on prompt injection attacks, using NTLM flaws as lessons for authentication designs, tradeoffs between containers and WebAssembly, research gaps in the world of Programmable Logic Controllers, and more! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-326
Ben Holmes, product engineer at Warp, joins PodRocket to talk about local-first web apps and what it takes to run a database directly in the browser. He breaks down how moving data closer to the user can reduce latency, improve performance, and simplify frontend development. Learn about SQLite in the browser, syncing challenges, handling conflicts, and tools like WebAssembly, IndexedDB, and CRDTs. Plus, Ben shares insights from building his own SimpleSyncEngine and where local-first development is headed! Links https://bholmes.dev https://www.linkedin.com/in/bholmesdev https://www.youtube.com/@bholmesdev https://x.com/bholmesdev https://bsky.app/profile/bholmes.dev https://github.com/bholmesdev We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Ben Holmes.
Liquid Weekly Podcast: Shopify Developers Talking Shopify Development
In this episode of the Liquid Weekly Podcast, hosts Karl Meisterheim and Taylor Page welcome back Jan Frey, one of the greatest Shopify development teachers on the web with the incredibly helpful and popular YouTube channel Coding with Jan. The conversation covers a range of topics including the current landscape of Shopify development, strategies for finding clients, the importance of professionalism, and the evolving role of AI in web development. Jan shares insights on his new JavaScript training program aimed at helping developers enhance their skills, while also discussing the significance of soft skills in client interactions.
Takeaways
Finding clients often comes from referrals and established relationships.
Professionalism is key in differentiating yourself as a developer.
AI tools can enhance productivity but should not replace human developers.
Soft skills are just as important as technical skills in client interactions.
Building a solid portfolio and online presence is crucial for new developers.
JavaScript is essential for Shopify development and should be learned thoroughly.
Understanding the Shopify admin and Liquid is vital for effective development.
Networking and community engagement can lead to more opportunities.
Creating content can help establish authority and attract clients.
Continuous learning and adapting to new tools is necessary for success in development.
Introducing the .dev Assistant VSCode Extension - https://shopify.dev/changelog/introdu...
[action required] Checkout APIs will be shut down April 1, 2025 - https://shopify.dev/changelog/checkou...
[action required] AMAZON_PAY enumerated in DigitalWallets - https://shopify.dev/changelog/amazonp...
[action required] Metafield description input field removal - https://shopify.dev/changelog/metafie...
New customer address capabilities in the Admin API - https://shopify.dev/changelog/new-cus...
Timestamps
00:00 Exploring Shopify Development and Educational Initiatives
01:25 The Evolution of Development in 2025
04:23 Finding Clients and Building a Portfolio
07:21 Soft Skills in Development and Client Interaction
13:15 Navigating Cold Outreach Strategies
17:30 Building a Professional Online Presence
22:59 The Importance of Referrals and Networking
31:52 Establishing Technical Knowledge in Development
38:49 The Future of Development in an AI World
40:47 The Role of AI in Web Development
46:04 Essential Skills for Freelance Developers
47:03 Mastering JavaScript for Shopify
52:56 Shopify Updates and Changes
01:01:35 Personal Highlights and Future Collaborations
Find Jan Online
YouTube: https://www.youtube.com/@CodingwithJan
LinkedIn: /jan-frey
Twitter (X): https://x.com/Coding_with_Jan
Website: https://codingwithjan.com/
Freemote: https://www.freemote.com/
JavaScript Training: https://www.codingwithjan.com/javascr...
Resources
Luck Sail: The little risks you can take to incr...
Dev Changelog
Picks of the Week
Karl: Ruby on Rails and Web Assembly - https://web.dev/blog/ruby-on-rails-on...
Jan: Working with Shopify Academy - https://www.shopifyacademy.com/
Taylor: GoRuck Weighted Vest - https://www.goruck.com/products/train...
Signup for Liquid Weekly Newsletter
Don't miss out on expert insights and tips—subscribe to Liquid Weekly for more content like this delivered right to your inbox each week - https://liquidweekly.com/
News includes a new library called phoenix_sync for real-time sync in Postgres-backed Phoenix applications, Peter Solnica released a Text Parser for extracting structured data from text, a useful tip on finding Hex package versions locally with mix hex.info, Wasmex updated to v0.10 with WebAssembly component support, and Chrome introduces a new browser feature similar to LiveView.JS. We also talked with Alistair Woodman and Jonatan Männchen from the EEF about Jonatan's role as CISO, the Security Working Group, and their work on OpenChain compliance for supply-chain security, Software Bill of Materials (SBoMs), and what these initiatives mean for the Elixir community, and more! Show Notes online - http://podcast.thinkingelixir.com/245 (http://podcast.thinkingelixir.com/245) Elixir Community News https://gigalixir.com/thinking (https://gigalixir.com/thinking?utm_source=thinkingelixir&utm_medium=shownotes) – Gigalixir is sponsoring the show, offering 20% off standard tier prices for a year with promo code "Thinking". https://github.com/electric-sql/phoenix_sync (https://github.com/electric-sql/phoenix_sync?utm_source=thinkingelixir&utm_medium=shownotes) – New library called phoenix_sync providing real-time sync for Postgres-backed Phoenix applications. https://hexdocs.pm/phoenix_sync/readme.html (https://hexdocs.pm/phoenix_sync/readme.html?utm_source=thinkingelixir&utm_medium=shownotes) – Documentation for phoenix_sync, a solution for building modern, real-time apps with local-first/sync in Elixir. https://github.com/josevalim/sync (https://github.com/josevalim/sync?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim's original proof of concept repo that was promptly archived. https://electric-sql.com/ (https://electric-sql.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Electric SQL's platform that syncs subsets of Postgres data into local apps and services, allowing data to be available offline and in-sync. https://solnic.dev/posts/announcing-textparser-for-elixir/ (https://solnic.dev/posts/announcing-textparser-for-elixir/?utm_source=thinkingelixir&utm_medium=shownotes) – Peter Solnica released TextParser, a library for extracting interesting parts of text like hashtags and links. https://hexdocs.pm/text_parser/readme.html (https://hexdocs.pm/text_parser/readme.html?utm_source=thinkingelixir&utm_medium=shownotes) – Documentation for the Text Parser library that helps parse text into structured data. https://www.elixirstreams.com/tips/mix-hex-info (https://www.elixirstreams.com/tips/mix-hex-info?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir stream tip on using mix hex.info to find the latest package version for a Hex package locally, without needing to search on hex.pm or GitHub. https://github.com/phoenixframework/tailwind/blob/main/README.md#updating-from-tailwind-v3-to-v4 (https://github.com/phoenixframework/tailwind/blob/main/README.md#updating-from-tailwind-v3-to-v4?utm_source=thinkingelixir&utm_medium=shownotes) – Guide for upgrading Tailwind to V4 in existing Phoenix applications using Tailwind's automatic upgrade helper. https://gleam.run/news/hello-echo-hello-git/ (https://gleam.run/news/hello-echo-hello-git/?utm_source=thinkingelixir&utm_medium=shownotes) – Gleam 1.9.0 release with searchability on hexdocs, Echo debug printing for improved debugging, and ability to depend on Git-hosted dependencies. 
https://d-gate.io/blog/everything-i-was-lied-to-about-node-came-true-with-elixir (https://d-gate.io/blog/everything-i-was-lied-to-about-node-came-true-with-elixir?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post discussing how promises made about NodeJS actually came true with Elixir. https://hexdocs.pm/wasmex/Wasmex.Components.html (https://hexdocs.pm/wasmex/Wasmex.Components.html?utm_source=thinkingelixir&utm_medium=shownotes) – Wasmex updated to v0.10 with support for WebAssembly components, enabling applications and components to work together regardless of original programming language. https://ashweekly.substack.com/p/ash-weekly-issue-8 (https://ashweekly.substack.com/p/ash-weekly-issue-8?utm_source=thinkingelixir&utm_medium=shownotes) – AshWeekly Issue 8 covering AshOps with mix task capabilities for CRUD operations and BeaconCMS being included in the Ash HQ installer script. https://developer.chrome.com/blog/command-and-commandfor (https://developer.chrome.com/blog/command-and-commandfor?utm_source=thinkingelixir&utm_medium=shownotes) – Chrome update brings new browser feature with commandfor and command attributes, similar to Phoenix LiveView.JS but native to browsers. https://codebeamstockholm.com/ (https://codebeamstockholm.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Code BEAM Lite announced for Stockholm on June 2, 2025 with keynote speaker Björn Gustavsson, the "B" in BEAM. https://alchemyconf.com/ (https://alchemyconf.com/?utm_source=thinkingelixir&utm_medium=shownotes) – AlchemyConf coming up March 31-April 3 in Braga, Portugal. Use discount code THINKINGELIXIR for 10% off. https://www.gigcityelixir.com/ (https://www.gigcityelixir.com/?utm_source=thinkingelixir&utm_medium=shownotes) – GigCity Elixir and NervesConf on May 8-10, 2025 in Chattanooga, TN, USA. https://www.elixirconf.eu/ (https://www.elixirconf.eu/?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf EU on May 15-16, 2025 in Kraków & Virtual. https://goatmire.com/#tickets (https://goatmire.com/#tickets?utm_source=thinkingelixir&utm_medium=shownotes) – Goatmire tickets are on sale now for the conference on September 10-12, 2025 in Varberg, Sweden. Do you have some Elixir news to share? 
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources https://elixir-lang.org/blog/2025/02/26/elixir-openchain-certification/ (https://elixir-lang.org/blog/2025/02/26/elixir-openchain-certification/?utm_source=thinkingelixir&utm_medium=shownotes) https://cna.erlef.org/ (https://cna.erlef.org/?utm_source=thinkingelixir&utm_medium=shownotes) – EEF CVE Numbering Authority https://erlangforums.com/t/security-working-group-minutes/3451/22 (https://erlangforums.com/t/security-working-group-minutes/3451/22?utm_source=thinkingelixir&utm_medium=shownotes) https://podcast.thinkingelixir.com/220 (https://podcast.thinkingelixir.com/220?utm_source=thinkingelixir&utm_medium=shownotes) – previous interview with Alistair https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act (https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act?utm_source=thinkingelixir&utm_medium=shownotes) – CRA - Cyber Resilience Act https://www.cisa.gov/ (https://www.cisa.gov/?utm_source=thinkingelixir&utm_medium=shownotes) – CISA US Government Agency https://www.cisa.gov/sbom (https://www.cisa.gov/sbom?utm_source=thinkingelixir&utm_medium=shownotes) – Software Bill of Materials https://oss-review-toolkit.org/ort/ (https://oss-review-toolkit.org/ort/?utm_source=thinkingelixir&utm_medium=shownotes) – Desire to integrate with tooling outside the Elixir ecosystem like OSS Review Toolkit https://github.com/voltone/rebar3_sbom (https://github.com/voltone/rebar3_sbom?utm_source=thinkingelixir&utm_medium=shownotes) https://cve.mitre.org/ (https://cve.mitre.org/?utm_source=thinkingelixir&utm_medium=shownotes) https://openssf.org/projects/guac/ (https://openssf.org/projects/guac/?utm_source=thinkingelixir&utm_medium=shownotes) https://erlef.github.io/security-wg/securityvulnerabilitydisclosure/ (https://erlef.github.io/security-wg/security_vulnerability_disclosure/?utm_source=thinkingelixir&utm_medium=shownotes) – EEF Security WG Vulnerability Disclosure Guide Guest Information - https://x.com/maennchen_ (https://x.com/maennchen_?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan on Twitter/X - https://bsky.app/profile/maennchen.dev (https://bsky.app/profile/maennchen.dev?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan on Bluesky - https://github.com/maennchen/ (https://github.com/maennchen/?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan on Github - https://maennchen.dev (https://maennchen.dev?utm_source=thinkingelixir&utm_medium=shownotes) – Jonatan's Blog - https://www.linkedin.com/in/alistair-woodman-51934433 (https://www.linkedin.com/in/alistair-woodman-51934433?utm_source=thinkingelixir&utm_medium=shownotes) – Alistair Woodman on LinkedIn - awoodman@erlef.org - https://github.com/ahw59/ (https://github.com/ahw59/?utm_source=thinkingelixir&utm_medium=shownotes) – Alistair on Github - http://erlef.org/ (http://erlef.org/?utm_source=thinkingelixir&utm_medium=shownotes) – Erlang Ecosystem Foundation Website Find us online - Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com) - Message the show - X (https://x.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen on X - @brainlid (https://x.com/brainlid) - Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social) 
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Episode Summary
In this episode of The Secure Developer, Danny Allan sits down with Mrinal Wadhwa, CTO at Ockam, to explore the evolving landscape of secure communication in distributed systems. They discuss the challenges of securing microservices, IoT networks, and Kubernetes environments and how traditional TLS-based security models may no longer be sufficient. Mrinal shares insights into Ockam's approach to end-to-end encrypted, mutually authenticated channels and the impact of WebAssembly, passkeys, and modern cryptographic identity management on security. Tune in for a deep dive into how organizations can rethink security at runtime to minimize risks in today's complex digital ecosystems.
Show Notes
Security in modern applications is more challenging than ever, with microservices architectures, IoT deployments, and distributed computing environments introducing new risks. In this episode, Danny Allan welcomes Mrinal Wadhwa, CTO at Ockam, to discuss how secure communication models need to evolve beyond traditional TLS and perimeter-based defenses.
Topics covered include:
The challenges of securing microservices and Kubernetes clusters
How end-to-end encryption and mutual authentication can minimize risk
The importance of cryptographic identities and key rotation at scale
How Ockam enables secure channels across multiple transport layers (TCP, Bluetooth, Kafka, etc.)
The role of WebAssembly and passkeys in rethinking security models
Shifting from perimeter-based security to secure-by-design communication
Mrinal shares key insights on how organizations can rethink risk at runtime, considering the number of people and systems involved in data flow rather than just static build-time dependencies. Whether you're a security leader, developer, or architect, this episode provides actionable insights on building trust in your infrastructure without compromising performance or agility.
Links
Ockam
Passkeys Overview
Private Compute Cloud by Apple
Snyk - The Developer Security Company
Follow Us
Our Website
Our LinkedIn
Welcome to episode 293 of The Cloud Pod – where the forecast is always cloudy! This week we've got a lot of news and, surprise, a new installment of Cloud Journey AND an aftershow – so make sure to stay tuned for that! We've got undersea cables, Go 1.24, Wasm, Anthropic and more.
Titles we almost went with this week:
Let's Go!
Under Sea cables make AI go BRRRRRR
The CloudPod says it will grow the listeners by 10x by 2027
A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info.
General News
01:30 Go 1.24 is released!
Go 1.24 has been released with a bunch of improvements! Go now fully supports generic type aliases. It also includes several performance improvements to the runtime that have reduced CPU overhead by 2-3% on average across a suite of representative benchmarks. (Say that 5 times fast.) Tool improvements around tool dependencies for a module. The standard library now includes new mechanisms to facilitate FIPS 140-3 compliance. And you know we love some good FIPS 140-3 compliance. Lastly, it includes some improved WebAssembly support – which we'll talk about later.
04:46 Unlocking global AI potential with next-generation subsea infrastructure
Meta announced their most ambitious subsea cable endeavor: Project Waterworth. Once the cable is completed, the project will reach five major continents and span over 50,000 km (longer than the earth's circumference), making it the world's longest subsea cable project using the highest-capacity technology available. It will bring connectivity to the US, India, Brazil, and South Africa, as well as other key regions. Waterworth will be a multi-billion dollar, multi-year investment to strengthen the scale and reliability of the world's digital highways by opening three new oceanic corridors with the abundant, high-speed connectivity needed to drive AI innovation around the world. Meta has apparently developed 20 subsea cables over the last decade, including multiple deployments of industry-leading subsea cables of 24 fiber pairs, compared to the typical 8 to 16 pairs of other new systems. They are also deploying a first-of-its-kind routing system, maximizing the cable load in deep waters at depths up to 7,000 meters and using enhanced burial techniques in high-risk fault areas, such as shallow waters near the coast, to avoid damage from ship anchors and other hazards. They wrap up the article by basically saying t
In this episode of the Modern Web Podcast, Danny Thompson and Adam Rackis talk with Abdel Sghiouar, Cloud Developer Advocate at Google, Kubernetes Podcast co-host, and CNCF Ambassador. Abdel shares insights from his global tech journey, from Morocco to Google's largest data center in Belgium, and now Sweden. They discuss cloud computing trends, including WebAssembly, AI-driven serverless workloads, and the shifting lines between frontend and backend. They also explore AI's impact on cloud development, from simplifying tooling to raising questions about job automation. Abdel offers a pragmatic take on AI's role, emphasizing that those who learn to leverage it will thrive.
Key points from this episode:
- Cultural Differences in Tech – Abdel's global experience shaped his view on work culture, from Morocco's relationship-driven workplaces to Europe's structured work-life balance.
- Making Cloud Simpler – He focuses on breaking down cloud concepts and making them more approachable for developers, from high-level serverless tools to hands-on infrastructure.
- AI in Cloud & Serverless – AI is improving cloud navigation, troubleshooting, and serverless efficiency, with tools like Google Cloud Assist and Vercel's Fluid Compute.
- AI & Tech Jobs – AI won't replace developers but will automate simpler tasks. Understanding fundamentals and problem-solving remain key to staying relevant.
0:00 - The challenge of opinionated platforms and integration in cloud
0:46 - Welcome to the Modern Web Podcast with Danny Thompson & Adam Rackis
1:15 - Guest introduction: Abdel Sghiouar, Cloud Developer Advocate at Google
2:01 - Abdel's international journey and how different work cultures shape tech perspectives
7:08 - Bridging the cloud knowledge gap for web developers
9:38 - Cloud fundamentals: compute, storage, and networking
12:19 - Emerging trends: WebAssembly, AI, and serverless evolution
16:07 - AI's impact on cloud development: Hype vs. reality
22:27 - The future of serverless and infrastructure automation
28:22 - Google Cloud vs. Firebase: Balancing simplicity and scalability
31:50 - What Abdel is geeking out about: Content creation and AI tools
34:51 - Closing thoughts & where to connect
We're getting close to two full decades of celebrating web hacking techniques. James Kettle shares which was his favorite, why the list is important to the web hacking community, and what inspires the kind of research that makes it onto the list. We discuss why we keep seeing eternal flaws like XSS and SQL injection making these lists year after year and how clever research is still finding new attack surfaces in old technologies. But there's a lot of new web technology still to be examined, from HTTP/2 and HTTP/3 to WebAssembly. Segment Resources: Top 10, 2024: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024 Full nomination list: https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open Project overview: https://portswigger.net/research/top-10-web-hacking-techniques Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-318
This episode was LIVE! Even if you usually listen to this show, if you want you can check out the video on YouTube :)
Visit https://cupogo.dev/ for store links, past episodes including transcripts, and more!
GopherCon Israel
Accepted proposal: Clone a Hash
We Replaced Our React Frontend with Go and WebAssembly from Dagger
Extensible Wasm Applications with Go by Cherry Mui
SQL NULLs are Weird! by Raymond Tukpe
Lightning round:
Go programs freeze when they are launched via a Steam client
Lovable's rewrite from Python to Go
Bunster: Compile shell scripts to static binaries
NVM for Windows
chi drops support for Go 1.14-1.19
Go 1.24.0 released
★ Support this podcast on Patreon ★
Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!
We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.
If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.
Logfire: bringing OTEL to AI
OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to Datafusion for their backend. We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in at ~43:19 for that part.
Agents are the killer app for graphs
Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.
“We were compelled enough by graphs once we got them right that our agent implementation [...]
is now actually a graph under the hood.”
Why Graphs?
* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or “waiting days” between steps in certain flows.
In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.
Full Video Episode
Like and subscribe!
Chapters
* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene
Show Notes
* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb
Transcript
Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?
Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.
Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.
Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.
Swyx [00:00:58]: Actually, maybe we'll hear it. Right from you, what is Pydantic and maybe a little bit of the origin story?
Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123 and a bunch of other sensible conversions. And as you can imagine, the semantics around it. Exactly when you convert and when you don't, it's complicated, but because of that, it's more than just validation. Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library.
It uses type hints for the for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.
Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structure output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structure output console in open source that people were talking about or was it just a random?
Samuel [00:02:26]: No, very much not. So I originally. Didn't implement JSON schema inside Pydantic and then Sebastian, Sebastian Ramirez, FastAPI came along and like the first I ever heard of him was over a weekend. I got like 50 emails from him or 50 like emails as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it will it kind of can be one source of source of truth for structured outputs and tools.
Swyx [00:03:09]: Before we dive in further on the on the AI side of things, something I'm mildly curious about, obviously, there's Zod in JavaScript land. Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?
Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. So there are some things that will come along, but for the most part, it should just get faster and cleaner.
Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising?
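To make the coercion, strict mode, and JSON Schema behavior described above concrete, here is a minimal sketch using standard Pydantic v2 calls; the `User` model and its fields are made up for the example, and exact error codes may vary by version:

```python
from pydantic import BaseModel, ValidationError


class User(BaseModel):
    # An int field: in the default (lax) mode Pydantic coerces "sensible"
    # inputs such as the string "123" into the integer 123.
    id: int
    name: str


# Lax mode: the string "123" is coerced to the integer 123.
user = User.model_validate({"id": "123", "name": "Samuel"})
print(user.id)  # 123 (an int, not a str)

# Strict mode disables that coercion, so the same input now fails validation.
try:
    User.model_validate({"id": "123", "name": "Samuel"}, strict=True)
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # e.g. "int_type": the string was not coerced

# The same model also emits a JSON Schema, which is what makes Pydantic a
# convenient single source of truth for structured outputs and tool definitions.
print(User.model_json_schema())
```

The generated schema is what lets one model double as both a validator and a structured-output or tool definition, which is the "one source of truth" point made above.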
And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLAMP, and when did you figure out, okay, this is something we should take seriously and focus more resources on it?
Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never go away. You can't get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust just to release open-source-free Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal model... The metric of performance is time to first token. That went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it didn't, it would have never have made financial sense in most companies. In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.
Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,
Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah.
I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.
Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.
Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?
Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while.
I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. But then we have the problem that by default, they're not type safe because if you have a like add edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a, there's a graph library called, but it allows, it does like a basic runtime type checking. Ironically using Pydantic to try and make up for the fact that like fundamentally that graphs are not typed type safe. Well, I like Pydantic, but it did, that's not a real solution to have to go and run the code to see if it's safe. There's a reason that starting type checking is so powerful. And so we kind of, from a lot of iteration eventually came up with a system of using normally data classes to define nodes where you return the next node you want to call and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is. Yeah. Inherently type safe. And once we got that right, I, I wasn't, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have interact with gen AI, right? It's going to be like web. There's no longer be like a web department in a company is that there's just like all the developers are building for web building with databases. The same is going to be true for gen AI.
Alessio [00:12:33]: Yeah. I see on your docs, you call an agent, a container that contains a system prompt function. Tools, structure, result, dependency type model, and then model settings. Are the graphs in your mind, different agents? Are they different prompts for the same agent? What are like the structures in your mind?
Samuel [00:12:52]: So we were compelled enough by graphs once we got them right, that we actually merged the PR this morning. That means our agent implementation without changing its API at all is now actually a graph under the hood as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.
Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?
Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.
Swyx [00:14:00]: Right. Yeah. You do have very neat implementation of sort of inferring the graph from type hints, I guess. Yeah. Is what I would call it. Yeah.
I think the question always is, and I have gone back and forth. I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS Step Functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is a little bit of a nice compromise. It looks like normal Pythonic code, but you just have to keep in mind what the type hints actually mean, and that's what we do with the quote-unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically: call a node, get a node back, call that node, get a node back, call that node. If you get an End, you're done. We will soon add support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can distribute the graph and run it across computers. And also, the other bit that's really valuable is across time. Because it's all very well if you look at lots of the graph examples that, say, Claude will give you. If it gives you an example, it gives you this lovely enormous Mermaid chart of the workflow for, for example, managing returns if you're an e-commerce company. But what you realize is that some of those lines are literally one function calling another function, and some of those lines are "wait six days for the customer to print their piece of paper and put it in the post." And if you're writing your demo project or your proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. So how do we manage that concept of basically being able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph and it continues to run. So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but right now, can Pydantic AI actually resume, you know, six days later, like you said? Or is this just a theoretical thing we can get to someday?

Samuel [00:16:16]: I think it's basically question and answer. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that's what we're going to add soon. But the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, but now you're going into the more orchestrated things like Airflow, Prefect, Dagster, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomil would not be too happy if I was like, oh yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that yet, at least.
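The execution model Samuel describes ("call a node, get a node back, until you get an End", and "pass the node that is the start point for carrying on") is small enough to sketch in full. This is a toy stand-in, not Pydantic AI's implementation:

```python
from dataclasses import dataclass


@dataclass
class End:
    value: str


@dataclass
class AskCustomer:
    order_id: int

    def run(self) -> 'WaitForReply':
        # e.g. send the customer an email, then hand over to the next node
        return WaitForReply(order_id=self.order_id)


@dataclass
class WaitForReply:
    order_id: int

    def run(self) -> End:
        return End(value=f'order {self.order_id} resolved')


def run_graph(node):
    """Call a node, get the next node back, stop when we hit End."""
    while not isinstance(node, End):
        node = node.run()
    return node


print(run_graph(AskCustomer(order_id=42)).value)   # run from the start
print(run_graph(WaitForReply(order_id=42)).value)  # or resume days later from a persisted node
```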
We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual calls, as I say, is literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. We have a whole. We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think I think just generally also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. I think the one thing that I would say like yours, you know, as a library, you don't have that much control of it over the infrastructure. I do like the idea that each new agents or whatever or unit of work, whatever you call that should spin up in this sort of isolated boundaries. Whereas yours, I think around everything runs in the same process. But you ideally want to sort of spin out its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now. Right. As in theory, you're just like as long as you can serialize the calls to the next node, you just have to all of the different containers basically have to have the same the same code. I mean, I'm super excited about Cloudflare workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now because I'm super excited about that as a like compute level for some of this stuff where exactly what you're saying, basically. You can run everything as an individual. Like worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get in front of the line. Especially.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will. I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported at full? I actually wasn't fully aware of what the status of that thing is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in scripting, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular, Pydantic. Because these workers where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have a share. Shared memory for all the different Pydantic installations, effectively. That's the thing they work out. 
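The "as long as you can serialize the calls to the next node" point from above is the whole trick behind distributing a graph across workers, or across days. A minimal sketch of the idea, with hypothetical node names; both workers just have to be running the same code:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class WaitForReply:
    order_id: int


# Worker A: persist which node comes next, plus its fields.
snapshot = json.dumps({'node': 'WaitForReply', 'state': asdict(WaitForReply(order_id=42))})

# Worker B, possibly a different container days later: rebuild the node and carry on.
NODE_TYPES = {'WaitForReply': WaitForReply}
data = json.loads(snapshot)
next_node = NODE_TYPES[data['node']](**data['state'])
print(next_node)  # resume the graph from here
```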
They're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles the WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare workers is.Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. It's just basic. And you're doing exactly that, right? You're using Rust to compile the WebAssembly and then you're calling that shared library from Python. And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI swarms would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI swarms. Well.Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what swarms would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically saying, how can we give people the same feeling that they were getting from swarms that led us to go and implement graphs? Because my, like, just call the next agent with Python code was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's not like, let us to get to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not. yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But like, it's been it's only been a couple of weeks. And of course, there's a point is that. Because they're relatively unopinionated about what you can go and do with them. They don't suit them. Like, you can go and do lots of lots of things with them, but they don't have the structure to go and have like specific names as much as perhaps like some other systems do. 
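For context on the Pyodide exchange above: from the Python side, pulling in a package with a compiled Rust-to-WebAssembly core looks roughly like this inside a Pyodide runtime. It assumes the package has a wasm build in the Pyodide distribution, as pydantic-core does; outside Pyodide (where top-level await is allowed in the console) this snippet will not run.

```python
# Inside a Pyodide console or similar in-browser environment:
import micropip

await micropip.install('pydantic')  # fetches the WebAssembly build of pydantic-core

from pydantic import BaseModel


class User(BaseModel):
    id: int
    name: str


print(User(id='1', name='Sam'))  # validation runs in the Rust core compiled to wasm
```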
I think what our agents are, which have a name and I can't remember what it is, but this basically system of like, decide what tool to call, go back to the center, decide what tool to call, go back to the center and then exit. One form of graph, which, as I say, like our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these like predefined graph names or graph structures or whether it's just like, yep, I built a graph or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh, yeah, everything's a graph. And then they probably over rotate and go go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet. But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compounding AI systems. I don't know if you know of or care. This is the Gartner world of things where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.Swyx [00:24:29]: This is really the argument is that instead of putting everything in one model, you have more control and more maybe observability to if you break everything out into composing little models and changing them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they. Even if you have the observability through log five that you can see what was going on, if you don't have a nice hook point to say, hang on, this is all gone wrong. You have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But like what you need to be able to do is effectively iterate through these runs so that you can have your own control flow where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to to break out of it. But yeah, basically, it's the same point, which is like if you have two bigger unit of work to some extent, whether or not it involves gen AI. But obviously, it's particularly problematic in gen AI. You only find out afterwards when you've spent quite a lot of time and or money when it's gone off and done done the wrong thing.Swyx [00:25:39]: Oh, drop on this. We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we we developers talk about this. And then the machine learning researchers look at us. And laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. 
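The "call next in a loop rather than just running the full graph" point is worth making concrete: if you step the graph yourself, you get a natural hook for your own control flow, such as a cost or step budget. A toy sketch of the pattern, not Pydantic AI's API:

```python
from dataclasses import dataclass


@dataclass
class End:
    value: str


@dataclass
class CallModel:
    remaining: int

    def run(self) -> 'CallModel | End':
        # pretend each step costs one model call
        if self.remaining == 0:
            return End(value='done')
        return CallModel(remaining=self.remaining - 1)


node = CallModel(remaining=100)
budget, spent = 5, 0

# Stepping the graph ourselves instead of running it to completion
# gives us a place to bail out on our own terms.
while not isinstance(node, End):
    if spent >= budget:
        print(f'bailing out after {spent} steps, something looks off')
        break
    node = node.run()
    spent += 1
```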
So I think there's a certain amount of we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that because I think AGI is kind of this hand wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do a chain of thoughts with graphs and you could manually orchestrate a nice little graph that does like. Reflect, think about if you need more, more inference time, compute, you know, that's the hot term now. And then think again and, you know, scale that up. Or you could train Strawberry and DeepSeq R1. Right.Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And I like took a certain amount of self-control not to describe that it wasn't exponential. But my main point was. If models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model and, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through. Whereas, you know, so if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high net worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to tell them, like structure what they go and do and constrain the routes in which they take.Swyx [00:27:42]: Yeah. Yeah. Agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is, oh, you can easily swap from OpenAI to Cloud to Grok. You also have, which I didn't know about, Google GLA, which I didn't really know about until I saw this in your docs, which is generative language API. I assume that's AI Studio? Yes.Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that like some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.Swyx [00:28:28]: I agree with that.Samuel [00:28:29]: So we have, again, another example of like, well, I think we go the extra mile in terms of engineering is we run on every commit, at least commit to main, we run tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where, yeah, GLA is a little bit better. GLA1 was failing every single run. One of their tests would fail. And we, I think we might even have commented out that one at the moment. So like all of the models fail more often than you might expect, but like that one seems to be particularly likely to fail. 
But Vertex is the same API, but much more reliable.

Swyx [00:29:01]: My rant here is that versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put to you, and this can be agree-to-disagree, this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or what's the other one in JavaScript, Portkey. That's their job. They focus on that one thing and they normalize APIs for you. All new models are automatically added and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck because Pydantic AI doesn't have DeepSeek yet.

Samuel [00:29:38]: Yeah, it does.

Swyx [00:29:39]: Oh, it does. Okay, I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, a defined piece of infrastructure that people have?

Samuel [00:29:49]: I think if a company who are well known, who are respected by everyone, had come along and done this at the right time, and maybe we should have done it a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth, and it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from, I kind of see your point, but I think the truth is that everyone is centralizing around OpenAI. OpenAI's API is the one to support. So DeepSeek supports that, Grok supports that, Ollama also does it. If there is that universal library right now, it's more or less the OpenAI SDK. And it's very high quality, it's well type checked, it uses Pydantic, so I'm biased, but I think it's pretty well respected anyway.

Swyx [00:30:57]: There are different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.

Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which to one extent or another effectively host multiple models, but they don't unify the API. They do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but like I say, the auth is centralized.

Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we can discuss all this all day. There's a lot of APIs. I agree.

Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.

Alessio [00:31:39]: And I guess the other side of routing models and picking models is evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. My favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip.
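The "everyone is centralizing around OpenAI's API" point is easy to see in practice: the official openai Python SDK can be pointed at any compatible backend by swapping base_url. The endpoints and model names below are illustrative; check each provider's documentation.

```python
from openai import OpenAI

# Same client class, different OpenAI-compatible backends.
deepseek = OpenAI(base_url='https://api.deepseek.com', api_key='sk-...')
local = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')  # a local Ollama server

resp = local.chat.completions.create(
    model='llama3.2',  # whatever model the backend actually serves
    messages=[{'role': 'user', 'content': 'Say hi in five words.'}],
)
print(resp.choices[0].message.content)
```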
I think you have this TestModel, which is, like, just through Python, you try and figure out what the model might respond without actually calling the model. And then you have the FunctionModel, where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?

Samuel [00:32:18]: On those two, I think what you see is what you get. On the evals, I think watch this space. It's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals; it would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.

Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.

Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.

Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. So there's kind of like the vibe side. You have evals, which is what you do when you're building, right, because you cannot really test it that many times to get statistical significance. And then there's the production eval. So you also have LogFire, which is kind of your observability product, which I tried before; it's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And as people think about evals, what are the right things to measure? What's the right number of samples that you need to actually start making decisions?

Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200, for, by definition, 15% of the work. But the exact "how many examples do you need" question is a much harder one to answer, because it's deep within how models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, where we allow you to write SQL directly against your data and we're trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it, we think that's valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate, and being able to write their own SQL, connected to the API, and effectively query the data like it's a database, allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of testing what's possible by basically writing SQL directly against LogFire, as any user could. I think the other really interesting bit that's going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. It's a relatively new project.
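To make the mocking discussion above concrete: Pydantic AI ships test doubles so unit tests never hit a real model. A sketch based on the docs at the time; import paths and names (TestModel, agent.override, result.data) may have moved in later versions.

```python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o', system_prompt='Be concise.')


def test_agent_without_network():
    # TestModel fabricates a response that satisfies the agent's schema,
    # so this test is fast, deterministic, and offline.
    with agent.override(model=TestModel()):
        result = agent.run_sync('hello')
    assert result.data is not None
```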
A lot of it's still being added at the moment. But it's basically the idea that they unify how both SDKs and agent frameworks send observability data to any OpenTelemetry endpoint. And so having that unification allows us to go and basically compare different libraries, compare different models much better. That stuff's at a very early stage of development. One of the things we're going to be working on pretty soon is, basically, I suspect Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability. With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow, and they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into AI workflows? There's kind of the question of: is a trace, or a span, an LLM call? Is it the agent? Is it the broader thing you're tracking? How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR, which I think may now have been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within the GenAI conventions. I'm not particularly compelled by that, because I don't think that's actually by any means the common use case. But I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call: what data you would send to your telemetry provider, and how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is you can, in the end, send whatever attributes you like. But yeah, there's quite a lot of churn in that space and in exactly how we store the data. I think one of the most interesting things, though, is that if you think about observability traditionally, sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would. It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of that data is going to be sent.
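Most observability SDKs scrub by key name, which catches the Datadog-style mistake Samuel describes but does nothing for PII buried inside LLM message text. A toy illustration of the limitation, not LogFire's actual scrubber:

```python
import re

SENSITIVE_KEY = re.compile(r'password|secret|token|api[_-]?key', re.IGNORECASE)


def scrub(attributes: dict) -> dict:
    """Redact values whose *key* looks sensitive; free-form text passes straight through."""
    return {k: '[REDACTED]' if SENSITIVE_KEY.search(k) else v for k, v in attributes.items()}


print(scrub({
    'password': 'hunter2',                              # caught
    'prompt': 'my card number is 4111 1111 1111 1111',  # not caught: it is just text
}))
```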
And I think that's why companies like LangSmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud hosted, but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context, you're just storing everything. And then you're going to offer kind of a self-hosting option for the platform, basically?

Samuel [00:38:23]: So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see "password" as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance is depending on a third party. If you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that. Here you're going to have spans that maybe take a long time to perform because the GLA API is not working or because OpenAI is kind of overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?

Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a trace, sorry, at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing GenAI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.

Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build their own? Why does everybody want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen and it's going to live inside OTel, and we might help with it. But we're a tiny team; we don't have time to go and do all of that work. So OpenLLMetry, interesting project.
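Sending span data at start as well as at finish is not something the stock OTel SDK does, but the SDK's SpanProcessor hook makes the idea easy to sketch. This is an illustration of the approach, not LogFire's implementation; export_pending is a hypothetical method on whatever exporter you pair it with.

```python
from opentelemetry.sdk.trace import SpanProcessor


class EagerSpanProcessor(SpanProcessor):
    """Ship a 'span started' record immediately, then the finished span as usual."""

    def __init__(self, exporter):
        self.exporter = exporter

    def on_start(self, span, parent_context=None):
        self.exporter.export_pending(span)  # hypothetical: send name, start time, attributes now

    def on_end(self, span):
        self.exporter.export([span])  # standard export of the completed span
```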
But I suspect eventually most of those semantic like that instrumentation of the big of the SDKs will live, like I say, inside the main OpenTelemetry report. I suppose. What happens to the agent frameworks? What data you basically need at the framework level to get the context is kind of unclear. I don't think we know the answer yet. But I mean, I was on the, I guess this is kind of semi-public, because I was on the call with the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not like natively implemented. And obviously they're having quite a tough time. And I was realizing, hadn't really realized this before, but how lucky we are to primarily be talking about our own agent framework, where we have the control rather than trying to go and instrument other people's.Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that is moved out of attributes and into OTel events. OTel events in turn are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on. I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation off. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have genai.usage.prompt tokens and genai.usage.completion tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in or sort of reifying things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.Samuel [00:42:54]: I mean, that's what's neat about OTel is you can always go and send another attribute and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer, this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind because you're relying on someone else to keep things up to date.Swyx [00:43:14]: Or you fall behind because you've got other things going on.Samuel [00:43:17]: Yeah, yeah. That's fair. That's fair.Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was kind of only familiar with LogFire because of your Series A announcement. I actually thought you were making a separate company. I remember some amount of confusion with you when that came out. So to be clear, it's Pydantic LogFire and the company is one company that has kind of two products, an open source thing and an observability thing, correct? Yeah. I was just kind of curious, like any learnings building LogFire? 
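For reference, emitting those attributes by hand with the OpenTelemetry API looks like this. The gen_ai.* keys follow the semantic conventions roughly as they stood at the time of this conversation; the spec has kept evolving (including the token-usage attribute names), so treat the exact keys as a snapshot rather than gospel.

```python
from opentelemetry import trace

tracer = trace.get_tracer('my-app')

# Without an SDK configured this is a no-op tracer, so the snippet is safe to run anywhere.
with tracer.start_as_current_span('chat gpt-4o') as span:
    span.set_attribute('gen_ai.system', 'openai')
    span.set_attribute('gen_ai.request.model', 'gpt-4o')
    # ... make the actual model call here ...
    span.set_attribute('gen_ai.usage.prompt_tokens', 42)
    span.set_attribute('gen_ai.usage.completion_tokens', 7)
```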
So classic question is, do you use ClickHouse? Is this like the standard persistence layer? Any learnings doing that?Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension to do analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one. I'll say that. But like, we've got to the right one in the end. I think we could have realized that Timescale wasn't right. I think ClickHouse. They both taught us a lot and we're in a great place now. But like, yeah, it's been a real journey on the database in particular.Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to like double click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect because, you know, Timescale is like an extension on top of Postgres. Not super meant for like high volume logging. But like, yeah, tell us those decisions.Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON and got roundly stepped on because apparently it does now. So they've obviously gone and built their proper JSON support. But like back when we were trying to use it, I guess a year ago or a bit more than a year ago, everything happened to be a map and maps are a pain to try and do like looking up JSON type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff. You can choose to make them top level columns if you want. But the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse. Also, ClickHouse had some really ugly edge cases like by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second because they compared intervals just by the number, not the unit. And I complained about that a lot. And then they caused it to raise an error and just say you have to have the same unit. Then I complained a bit more. And I think as I understand it now, they have some. They convert between units. But like stuff like that, when all you're looking at is when a lot of what you're doing is comparing the duration of spans was really painful. Also things like you can't subtract two date times to get an interval. You have to use the date sub function. But like the fundamental thing is because we want our end users to write SQL, the like quality of the SQL, how easy it is to write, matters way more to us than if you're building like a platform on top where your developers are going to write the SQL. And once it's written and it's working, you don't mind too much. So I think that's like one of the fundamental differences. The other problem that I have with the ClickHouse and Impact Timescale is that like the ultimate architecture, the like snowflake architecture of binary data in object store queried with some kind of cache from nearby. They both have it, but it's closed sourced and you only get it if you go and use their hosted versions. 
And so even if we had got through all the problems with Timescale or ClickHouse, we would end up like, you know, they would want to be taking their 80% margin. And then we would be wanting to take that would basically leave us less space for margin. Whereas data fusion. Properly open source, all of that same tooling is open source. And for us as a team of people with a lot of Rust expertise, data fusion, which is implemented in Rust, we can literally dive into it and go and change it. So, for example, I found that there were some slowdowns in data fusion's string comparison kernel for doing like string contains. And it's just Rust code. And I could go and rewrite the string comparison kernel to be faster. Or, for example, data fusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do. It's something we needed. I was able to go and implement that in a weekend using our JSON parser that we built for Pydantic Core. So it's the fact that like data fusion is like for us the perfect mixture of a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that like if you were trying to do that in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's C++, relatively modern C++. But like as a team of people who are not C++ experts, that's much scarier than data fusion for us.Swyx [00:47:47]: Yeah, that's a beautiful rant.Alessio [00:47:49]: That's funny. Most people don't think they have agency on these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I contribute the most back to it? You know, so but I think you obviously have an open source first mindset. So that makes a lot of sense.Samuel [00:48:05]: I think if we were probably better as a startup, a better startup and faster moving and just like headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with data fusion. We like we're quite engaged now with the data fusion community. Andrew Lam, who maintains data fusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just like building on ClickHouse and moving as fast as we can.Swyx [00:48:34]: OK, we're about to zoom out and do Pydantic run and all the other stuff. But, you know, my last question on LogFire is really, you know, at some point you run out sort of community goodwill just because like, oh, I use Pydantic. I love Pydantic. I'm going to use LogFire. OK, then you start entering the territory of the Datadogs, the Sentrys and the honeycombs. Yeah. So where are you going to really spike here? What differentiator here?Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about like web observability and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term because all observability is cloud first. The same is going to happen to gen AI. And so whether or not you're trying to compete with Datadog or with Arise and Langsmith, you've got to do first class. You've got to do general purpose observability with first class support for AI. 
And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't done. For all that I'm a fan of Datadog and what they've done, if you search "Datadog logging Python" and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well. But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously you're MIT licensed; you don't have any rolling license like Sentry has, where you can only use the one-year-old version of it as open source. Was that a hard decision?

Samuel [00:50:41]: So to be clear, LogFire is closed source. Pydantic and Pydantic AI are MIT licensed and properly open source, and then LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source available, just meant that we avoided that whole subject matter. I think the other way to look at it is, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company puts us up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic; Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is openly for profit, right? As in, we're not claiming otherwise. We're not trying to walk a line of "it's open source, but really we make it hard to deploy so you probably want to pay us." We're trying to be straight that it's something you pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So I saw this new thing, I don't know if it's a product you're building, pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. What's the pydantic.run story?

Samuel [00:52:09]: So pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. It doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, and we'll have some kind of limit per day on what you can spend on it. The other thing we wanted to b
Episode SummaryIn this episode of The Secure Developer, host Danny Allan sits down with David Mytton, founder and CEO of Arcjet, former CEO of Server Density, and co-founder of Console.dev. David shares his insights into bridging the “developer-security gap” with Arcjet, a cutting-edge middleware SDK designed to empower developers with advanced security tools like rate limiting and bot protection. The conversation dives into the evolution of developer tools, the growing role of AI in coding, and the future of secure software development in modern environments. David also offers a fascinating perspective on sustainable computing and the impact of clean energy in the tech industry.Show NotesIn this thought-provoking episode of The Secure Developer, host Danny Allan sits down with David Mytton, founder and CEO of Arcjet, to explore the evolving intersection of development, security, and AI. David, a serial entrepreneur with deep roots in cloud monitoring and developer tools, shares his journey from co-founding Server Density to building Arcjet, a groundbreaking solution for developers managing runtime security.The conversation begins with David's take on why developers should prioritize security early in the development lifecycle. He highlights the challenges developers face in modern environments, where traditional security tools often fail to integrate seamlessly with serverless and edge computing platforms. David introduces Arcjet as an innovative SDK that empowers developers to implement rate-limiting, bot detection, and other security measures directly in their applications, offering a developer-first approach to runtime protection.Delving deeper, the discussion shifts to the rise of WebAssembly as a transformative technology. David explains how WebAssembly enables near-native performance across platforms while providing unparalleled isolation—making it a perfect fit for modern security needs. He contrasts this with traditional intrusion detection systems and outlines how Arcjet leverages WebAssembly to fill the gaps left by legacy tools.The episode also explores the broader evolution of the developer ecosystem. From the increasing adoption of AI-powered coding tools to the growing interest in languages like Rust, David shares his perspective on how these trends are reshaping software development. He also discusses the challenges of balancing AI-generated code with the need for security and the potential for AI to exacerbate vulnerabilities if not carefully managed.As the conversation wraps up, David touches on his research in sustainable computing and its implications for the tech industry. He highlights the positive strides being made toward greener computing practices and how developers can contribute to a more sustainable future.This episode offers a rich blend of technical insights, forward-thinking ideas, and practical advice for developers and security professionals navigating the ever-changing landscape of software security and development.LinksArcjetConsoleAcquiaRust Programming LanguageUniversity of OxfordSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
Scott and Wes talk with Andrea Giammarchi (aka WebReflection) about his projects, including LinkDOM and PyScript, and the exciting future of running Python in the browser via WebAssembly. Show Notes 00:00 Welcome to Syntax! 01:04 Andrea's background and early work LinkDOM 07:25 Brought to you by Sentry.io 09:56 Pyscript 14:31 Why run Python in the browser? 20:17 Using WebAssembly to run different languages in JS 23:33 The advantages of WebAssembly 25:55 What excites Andrea about WASM Proposal: ESX as core JS feature 31:10 What is WASI? 32:21 Andrea's experience with IOT and microcontrollers 35:35 How can the JS ecosystem be improved? 38:07 Should we have reactivity in the browser? Signals 41:06 Andrea's thoughts on server-side APIs 43:43 Andrea's thoughts on TypeScript 49:13 Sick Picks & Shameless Plugs Sick Picks Andrea: ESP32 Shameless Plugs Andrea: Andrea's X / Twitter Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads