Podcasts about SQLite

Serverless relational database management system (RDBMS)

  • 234 podcasts
  • 459 episodes
  • 48m average duration
  • 5 new episodes per week
  • Latest episode: May 5, 2025


Latest podcast episodes about SQLite

Point-Free Videos
Modern Persistence: Schemas

May 5, 2025 · 57:10


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- What are the best, modern practices for persisting your application's state? We explore the topic by rebuilding Apple's Reminders app from scratch using SQLite, the most widely deployed database in all software. We will start by designing the schema that models our domain.
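The episode itself is built in Swift; as a concrete taste of the schema-design step it describes, here is a hypothetical Reminders-style schema sketched with Python's built-in sqlite3 module. The table and column names are illustrative guesses, not the episode's actual code.

```python
import sqlite3

# A minimal Reminders-style domain: lists contain reminders.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE remindersLists (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT NOT NULL
    );
    CREATE TABLE reminders (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT NOT NULL,
        isCompleted INTEGER NOT NULL DEFAULT 0,  -- SQLite stores booleans as integers
        dueDate TEXT,                            -- ISO-8601 text; SQLite has no date type
        listID INTEGER NOT NULL REFERENCES remindersLists(id) ON DELETE CASCADE
    );
""")
con.execute("INSERT INTO remindersLists (title) VALUES ('Groceries')")
con.execute("INSERT INTO reminders (title, listID) VALUES ('Buy milk', 1)")
print(con.execute("SELECT title, isCompleted FROM reminders").fetchall())
```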

ThunderCast
State of the Thunder #2

May 1, 2025 · 56:13


In our second State of the Thunder, we expand on the purpose of these meetings and go back to our roadmap. From there, we talk about bugs and stability, discussing how to tackle the backlogs of bugs in Bugzilla and whether Thunderbird is more or less stable now than in years past. We also chat about how to improve collaboration between MZLA and the Thunderbird Council, and finish with a technical discussion of the address book's SQLite database and the Thunderbird CLI.

ITSPmagazine | Technology. Cybersecurity. Society
Inside the DARPA AI Cyber Challenge: Securing Tomorrow's Critical Infrastructure Through AI and Healthy Competition | An RSAC Conference 2025 Conversation with Andrew Carney | On Location Coverage with Sean Martin and Marco Ciappelli

Apr 28, 2025 · 27:35


During RSAC Conference 2025, Andrew Carney, Program Manager at DARPA, and (remotely via video) Dr. Kathleen Fisher, Professor at Tufts University and Program Manager for the AI Cyber Challenge (AIxCC), guide attendees through an immersive experience called Northbridge, a fictional city designed to showcase the critical role of AI in securing infrastructure through the DARPA-led AI Cyber Challenge.

Inside Northbridge: The Stakes Are Real. Northbridge simulates the future of cybersecurity, blending AI, infrastructure, and human collaboration. It's not just a walkthrough; it's a call to action. Through simulated attacks on water systems, healthcare networks, and cyber operations, visitors witness firsthand the tangible impacts of vulnerabilities in critical systems. Dr. Fisher emphasizes that the AI Cyber Challenge isn't theoretical: the vulnerabilities competitors find and fix directly apply to real open-source software relied on by society today.

The AI Cyber Challenge: Pairing Generative AI with Cyber Reasoning. The AI Cyber Challenge (AIxCC) invites teams from universities, small businesses, and consortiums to create cyber reasoning systems capable of autonomously identifying and fixing vulnerabilities. Leveraging leading foundation models from Anthropic, Google, Microsoft, and OpenAI, the teams operate under tight constraints, with limited time, compute, and LLM credits, to uncover and patch vulnerabilities at scale. Remarkably, during the semifinals, teams found and fixed nearly half of the synthetic vulnerabilities, and even discovered a real-world zero-day in SQLite.

Building Toward DEFCON Finals and Beyond. The journey doesn't end at RSA. As the teams prepare for the AIxCC finals at DEFCON 2025, DARPA is increasing the complexity of the challenge, and the available resources. Beyond the competition, a core goal is public benefit: all cyber reasoning systems developed through AIxCC will be open-sourced under permissive licenses, encouraging widespread adoption across industries and government sectors.

From Competition to Collaboration. Carney and Fisher stress that the ultimate victory isn't in individual wins, but in strengthening cybersecurity collectively. Whether securing hospitals, water plants, or financial institutions, the future demands cooperation across public and private sectors. The Northbridge experience offers a powerful reminder: resilience in cybersecurity is built not through fear, but through innovation, collaboration, and a relentless drive to secure the systems we all depend on.

Guest: Andrew Carney, AI Cyber Challenge Program Manager, Defense Advanced Research Projects Agency (DARPA) | https://www.linkedin.com/in/andrew-carney-945458a6/
Hosts: Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

Episode Sponsors:
ThreatLocker: https://itspm.ag/threatlocker-r974 | Akamai: https://itspm.ag/akamailbwc | BlackCloak: https://itspm.ag/itspbcweb | SandboxAQ: https://itspm.ag/sandboxaq-j2en | Archer: https://itspm.ag/rsaarchweb | Dropzone AI: https://itspm.ag/dropzoneai-641 | ISACA: https://itspm.ag/isaca-96808 | ObjectFirst: https://itspm.ag/object-first-2gjl | Edera: https://itspm.ag/edera-434868

Resources:
The DARPA AIxCC Experience at RSAC 2025 Innovation Sandbox: https://www.rsaconference.com/usa/programs/sandbox/darpa
Learn more and catch more stories from RSAC Conference 2025 coverage: https://www.itspmagazine.com/rsac25

Keywords: andrew carney, kathleen fisher, marco ciappelli, sean martin, darpa, aixcc, cybersecurity, rsac 2025, defcon, ai cybersecurity, event coverage, on location, conference

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage | Want to tell your Brand Story Briefing as part of our event coverage? Learn More

Database School
Building a serverless database replica with Carl Sverre

Apr 18, 2025 · 88:59


Want to learn more about SQLite? Check out my SQLite course: https://highperformancesqlite.com

In this episode, Carl Sverre and I discuss why syncing everything is a bad idea and how his new project, Graft, makes edge-native, partially replicated databases possible. We dig into SQLite, object storage, transactional guarantees, and why Graft might be the foundation for serverless database replicas.

SQLSync: https://sqlsync.dev
Stop syncing everything blog post: https://sqlsync.dev/posts/stop-syncing-everything
Graft: https://github.com/orbitinghail/graft

Follow Carl: Twitter: https://twitter.com/carlsverre | LinkedIn: https://www.linkedin.com/in/carlsverre | Website: https://carlsverre.com/
Follow Aaron: Twitter: https://twitter.com/aarondfrancis | LinkedIn: https://www.linkedin.com/in/aarondfrancis | Website: https://aaronfrancis.com (articles, podcasts, courses, and more)

Chapters:
00:00 - Intro and Carl's controversial blog title
01:00 - Why "stop syncing everything" doesn't mean stop syncing
02:30 - The problem with full database syncs
03:20 - Quick recap of SQL Sync and multiplayer SQLite
04:45 - How SQL Sync works using physical replication
06:00 - The limitations that led to building Graft
09:00 - What is Graft? A high-level overview
16:30 - Syncing architecture: how Graft scales
18:00 - Graft's stateless design and Fly.io integration
20:00 - S3 compatibility and using Tigris as backend
22:00 - Latency tuning and express zone support
24:00 - Can Graft run locally or with Minio?
27:00 - Page store vs meta store in Graft
36:00 - Index-aware prefetching in SQLite
38:00 - Prefetching intelligence: Graft vs driver
40:00 - The benefits of Graft's architectural simplicity
48:00 - Three use cases: apps, web apps, and replicas
50:00 - Sync timing and perceived latency
59:00 - Replaying transactions vs logical conflict resolution
1:03:00 - What's next for Graft and how to get involved
1:05:00 - Hacker News reception and blog post feedback
1:06:30 - Closing thoughts and where to find Carl

The .NET Core Podcast
From Code to Cloud in 15 Minutes: Jason Taylor's Expert Insights And The Clean Architecture Template

Apr 4, 2025 · 62:14


This episode of The Modern .NET Show is supported, in part, by RJJ Software's Podcasting Services. Whether your company is looking to elevate its UK operations or reshape its US strategy, we can provide tailored solutions that exceed expectations.

Show Notes

"So I've been focused on the code to cloud journey, I like to call it, for the template. And two years ago, my goal was to provide a solution that could take you from code to cloud in 45 minutes or less. So I wanted it to be "file new project" to deploy a solution on Azure—because that's where my main focus is—within 45 minutes."— Jason Taylor

Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie "GaProgMan" Taylor.

In this episode, Jason Taylor (no relation) joined us to talk about his journey from Classic ASP to .NET and Azure. He also discusses clean architecture's maintainability, and his open-source Clean Architecture Solution template for ASP.NET Core, along with strategies for learning new frameworks and dealing with complexity.

"Right now the template supports PostgreSQL, SQLite, and SQL Server. If you want to support MySQL, it's relatively easy to do because there's already a Bicep module or a Terraform module that you can go in and use. So I went from 45 minutes to now I can get things up and running in, like, I don't know, two minutes of effort and 15 minutes of waiting around while I make my coffee"— Jason Taylor

Along the way, we talk about some of the complexities involved with creating a template which supports multiple different frontend technologies and .NET Aspire (which was news to me when we recorded), all the while maintaining the goal of being the simplest approach for enterprise development with Clean Architecture.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast`, and we'll dive into the core of Modern .NET.

Supporting the Show: If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show.

Full Show Notes: The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/from-code-to-cloud-in-15-minutes-jason-taylors-expert-insights-and-the-clean-architecture-template/

Jason's Links:
Jason's Clean Architecture repo on GitHub
Jason's Northwind Traders with Clean Architecture repo on GitHub
Connect with Jason
Jason's RapidBlazor repo on GitHub

Other Links:
C# DevKit for Visual Studio Code
Code, Coffee, and Clever Debugging: Leslie Richardson's Microsoft Journey and the C# Dev Kit in Visual Studio Code with Leslie Richardson
dotnet scaffold
devcontainers
.NET Aspire
Azure Developer CLI
GitHub CLI
Obsidian

Supporting the show: Leave a rating or review | Buy the show a coffee | Become a patron

Getting in Touch: Via the contact page | Joining the Discord

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show.

PodRocket - A web development podcast from LogRocket
Put your database in the browser with Ben Holmes

Apr 3, 2025 · 32:25


Ben Holmes, product engineer at Warp, joins PodRocket to talk about local-first web apps and what it takes to run a database directly in the browser. He breaks down how moving data closer to the user can reduce latency, improve performance, and simplify frontend development. Learn about SQLite in the browser, syncing challenges, handling conflicts, and tools like WebAssembly, IndexedDB, and CRDTs. Plus, Ben shares insights from building his own SimpleSyncEngine and where local-first development is headed!

Links:
https://bholmes.dev
https://www.linkedin.com/in/bholmesdev
https://www.youtube.com/@bholmesdev
https://x.com/bholmesdev
https://bsky.app/profile/bholmes.dev
https://github.com/bholmesdev

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today: https://logrocket.com/signup/?pdr

Special Guest: Ben Holmes.

Constructed Futures
AI Briefing Series: Expanding AI Agent Capabilities with Model Context Protocol (MCP)

Mar 17, 2025 · 24:50


Check out The Link.AI Consulting at https://agentic.construction
Connect with Hugh on LinkedIn
Here's a shorter briefing based on the same information.

Executive Summary:
Anthropic's Model Context Protocol (MCP), announced in late November 2024, is an open protocol designed to standardize how AI systems interact with external data sources and tools. It aims to overcome the current fragmented landscape of AI integration, where bespoke solutions are often required for each new connection. MCP establishes a universal framework for communication, simplifying development, enhancing AI agent effectiveness through improved context and tool access, and fostering a vibrant ecosystem of AI capabilities. By utilizing a client-server architecture and defining key primitives for data and action exchange, MCP offers a more dynamic and context-aware approach compared to traditional REST APIs. The emergence of MCP registries and marketplaces like smithery.ai further signifies its potential to transform the future of AI by enabling more interconnected, adaptable, and powerful AI systems.

Key Themes and Important Ideas/Facts:

1. Addressing the Challenges of AI Integration:
The current method of integrating AI models with external resources is often complex and requires custom solutions for each connection. "When building AI applications today, each project frequently requires unique, bespoke solutions for how AI processes are constructed and how they connect with necessary data resources." (Introduction) This leads to significant development and maintenance burdens.
MCP aims to solve this by providing a universal, open standard for connecting AI systems with data sources and tools. "MCP offers a unified solution to this problem by providing a universal, open standard for connecting AI systems with data sources, effectively replacing these fragmented integrations with a single, consistent protocol." (Introduction)
The motivation is to overcome the limitations of isolated AI models "trapped behind information silos and legacy systems." (Introduction, citing source 2)
MCP addresses the "MxN problem" by transforming it into an "N plus M setup," where each model and tool only needs to conform to the standard once. "Without a standardized protocol, this results in a complex web of M multiplied by N individual integrations... MCP's approach transforms this into a much simpler N plus M setup, where each tool and each model only needs to conform to the MCP standard once..." (Introduction, citing source 3)
By open-sourcing MCP, Anthropic intends to foster collaboration and a shared ecosystem.

2. Core Concepts of MCP:
Client-Server Architecture: MCP is built on this established pattern. "At its core, the Model Context Protocol (MCP) is built upon a client-server architecture, a well-established design pattern in computing, to facilitate the connection between AI models and external resources." (Core Concepts)
Host: The AI-powered application or agent environment the user interacts with (e.g., the Claude desktop app or an IDE plugin). "The Host is the AI-powered application or agent environment that the end-user directly interacts with." (Core Concepts) It can connect to multiple MCP servers and manages client permissions.
Client: An intermediary within the Host that manages the connection to a single MCP server, maintaining a one-to-one link. "The Client acts as an intermediary within the Host, responsible for managing the connection to a single MCP server." (Core Concepts) It handles the communication lifecycle and maintains stateful sessions.
Server: An external program that implements MCP and provides capabilities (tools, data, prompts) for a specific domain (e.g., databases or cloud services). "The Server is a program, typically external to the AI model itself, that implements the MCP standard and provides a specific set of capabilities." (Core Concepts) Anthropic and the community have released servers for Google Drive, Slack, GitHub, Postgres, SQLite, and web browsing.
This architecture is likened to a "USB port" for AI. "This client-server architecture, often likened to a 'USB port' for AI applications, provides a standardized way for AI assistants to 'plug into' any data source or service without requiring custom code for each connection." (Core Concepts, citing source 3)

3. MCP vs. REST APIs for AI Agents:
Limitations of REST APIs: They require significant manual effort, lack standardized context management, are often stateless, and rely on static API definitions. "Integrating AI agents with external services via REST APIs often requires significant manual effort and lacks a standardized way to manage the evolving context of agent interactions." (MCP vs. REST APIs for AI Agents)
Advantages of MCP:
Standardized Communication: Based on JSON-RPC, simplifying integration.
Dynamic Tool Discovery: AI can query servers to understand available tools. "AI models equipped with an MCP client can query connected servers to understand the tools and resources they offer." (MCP vs. REST APIs for AI Agents)
Two-Way Real-Time Interaction: Supports persistent connections for context updates.
Scenarios where MCP is the superior approach: complex workflows with multiple tools, real-time data integration, frequently changing toolsets, intelligent assistants, automated coding tools, and dynamic data analytics.

4. Enhancing AI Agent Effectiveness:
Improved Contextual Awareness and Management: MCP allows agents to access and retain relevant context from multiple sources, overcoming context window limitations. "One of the most significant ways in which the Model Context Protocol enhances the effectiveness of AI agents is by enabling improved contextual awareness and management." (Enhancing AI Agent Effectiveness) The ability to connect to multiple servers simultaneously supports complex workflows, and the "Resources" primitive provides just-in-time, modular context, leading to more efficient processing and accurate responses.
Facilitating Seamless Integration: MCP eliminates the need for custom code for each new data source or tool. "By providing a standardized interface, MCP eliminates the need for developers to write custom code for each new data source or tool that an AI agent needs to interact with." (Enhancing AI Agent Effectiveness) Pre-built servers for popular systems (Google Drive, Slack, GitHub, databases) streamline integration.
Supporting Advanced Reasoning and Decision-Making: The "Tools" primitive allows agents to invoke functions and access real-time data, and the "Sampling" primitive enables complex, multi-step reasoning processes (with recommended human approval).
Real-World Examples: corporate chatbots querying multiple internal systems; AI-powered coding assistants (Sourcegraph Cody, Zed Editor) accessing codebases; Anthropic's Claude Desktop accessing local files ("By integrating MCP, Claude can securely access local files, applications, and services on the user's computer." (Enhancing AI Agent Effectiveness)); AI2SQL generating SQL from natural language; Apify allowing AI agents to access Apify Actors for automation.

5. Driving Adoption for AI Tool Providers:
Standardized Integration: Reduces the complexity and costs of developing and maintaining multiple custom integrations. "By providing a single, open standard for connecting AI models with tools, MCP reduces the need for tool providers to develop and maintain multiple custom integrations tailored to different AI platforms." (Driving Adoption for AI Tool Providers)
Increased Interoperability: Tools can work with any MCP-compatible AI model, broadening the potential user base and reducing vendor lock-in. "Tools built using the MCP standard can seamlessly work with any AI model that has implemented an MCP client, regardless of the AI provider (e.g., Anthropic, OpenAI) or whether it's an open-source model." (Driving Adoption for AI Tool Providers)
Opportunities for Innovation and Specialization: Enables developers to create specialized servers that can be accessed by any MCP client, fostering a division of labor.
Benefits for Scalability and Future-Proofing: Ensures integrations remain compatible with future AI models adhering to the standard.

6. Real-World Use Cases and Examples of MCP Implementation (Detailed):
Coding Assistants: Sourcegraph Cody and Zed Editor.
Enterprise Integrations: Block and Apollo. "Companies like Block and Apollo have adopted MCP to securely connect their AI systems with internal data repositories and customer relationship management (CRM) systems." (Real-World Use Cases and Examples of MCP Implementation)
Desktop AI Applications: Anthropic's Claude Desktop.
Data Querying Tools: AI2SQL.
Automation Platforms: Apify.
Community-Built Servers: Numerous servers on platforms like Smithery.ai and mcp-get.com for databases, cloud services, and more.

7. Future Implications and the Evolving AI Ecosystem:
Fostering Interoperability and Standardization: MCP has the potential to become a universal standard for AI integration. "By establishing a universal standard for AI integration, MCP could become the equivalent of HTTP for the web or USB-C for device connectivity in the AI world." (Future Implications and the Evolving AI Ecosystem) It could decouple the choice of AI model from the underlying integrations.
Potential Impact on AI R&D and Deployment: May shift focus towards the effective utilization of external information over solely increasing model size, and could lead to more modular AI system designs.
Addressing Potential Challenges: Requires buy-in from AI providers and tool developers. Security is paramount, and ensuring user trust and human oversight is crucial. "Security is another paramount concern. Allowing AI agents to access and interact with external systems, especially sensitive enterprise data, necessitates robust security measures to prevent unauthorized access or data leaks." (Future Implications and the Evolving AI Ecosystem)

Conclusion:
MCP offers a promising path towards a more interconnected, context-aware, and effective AI ecosystem. Its standardized framework addresses critical integration challenges, enhances AI agent capabilities, and provides new opportunities for tool providers and the broader AI community. While adoption challenges exist, the potential transformative impact of MCP on the future of AI is significant.
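The briefing notes that MCP's standardized communication is based on JSON-RPC. For a concrete feel of that framing, here is a minimal Python sketch; the `tools/list` and `tools/call` method names follow the tool primitives described above, but the payload shapes and the `query_database` tool are illustrative assumptions, not copied from the spec.

```python
import json

def make_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope of the kind MCP exchanges."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Client -> server: discover which tools this MCP server exposes.
print(make_request(1, "tools/list"))

# Client -> server: invoke one tool with arguments (hypothetical tool name).
print(make_request(2, "tools/call", {
    "name": "query_database",
    "arguments": {"sql": "SELECT count(*) FROM users"},
}))
```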

The GeekNarrator
Hosted PostgreSQL on bare metal and uni kernel

Mar 14, 2025 · 60:03


The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join
Membership will get you access to member-only videos, exclusive notes, and a monthly 1:1 with me. All member-only videos are here: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA

About this episode:
In this episode, we talk to Søren Schmidt, Co-Founder and CEO of Prisma, discussing the evolution of Prisma from a backend-as-a-service to a popular ORM and now to Prisma Postgres. He shares insights into the challenges faced during this journey, the importance of user feedback, and the innovative architecture of Prisma Postgres, which leverages micro VMs for performance optimization. The conversation also touches on the complexities of managing data centers and the strategies employed to ensure a seamless user experience. Søren also discusses the details of Postgres snapshots, their impact on performance, and the mechanisms for fault tolerance. He explains how Pulse change data capture works and how Prisma Postgres simplifies database management for users.

Chapters:
00:00 Introduction to Prisma and Its Evolution
03:00 The Journey from ORM to Prisma Postgres
06:00 Simplifying Database Management
09:01 Understanding Prisma Postgres Architecture
12:12 The Role of Accelerate in Query Routing
14:51 Optimizing Query Processing with Micro VMs
18:12 Maintaining Postgres Integrity in a Micro VM Environment
21:07 User Experience and Community Feedback
23:57 Challenges of Data Center Management
27:09 Cold Starts and Performance Optimization
34:30 Understanding Snapshots in Postgres
38:55 Snapshot Mechanisms and Fault Tolerance
44:09 Change Data Capture with Pulse
55:07 Transitioning to Prisma Postgres
58:45 Community and Getting Started with Prisma Postgres

Some blogs worth checking out:
https://www.prisma.io/blog/prisma-postgres-the-future-of-serverless-databases
https://www.prisma.io/blog/cloudflare-unikernels-and-bare-metal-life-of-a-prisma-postgres-query
https://www.prisma.io/blog/announcing-prisma-postgres-early-access

Prisma Postgres relies heavily on the Unikraft project. There is a good introductory talk here: https://www.youtube.com/watch?v=n4wOyAuNhl0
And some very technical papers here: https://unikraft.org/community/papers
The best way to get started with Prisma Postgres is to go straight to https://www.prisma.io/

Like building real stuff? Try out CodeCrafters and build amazing real-world systems like Redis, Kafka, and SQLite. Use the link below to sign up and get 40% off a paid subscription: https://app.codecrafters.io/join?via=geeknarrator

Database internals series: https://youtu.be/yV_Zp0Mi3xs
Popular playlists:
Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA-
Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17
Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d
Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN

The GeekNarrator
Redpanda - High Performance Streaming Platform for Data Intensive Applications

Mar 14, 2025 · 65:28


About this episode:
In this conversation, Alex from Redpanda discusses his engineering background, the challenges faced in reliability engineering, and the journey of building a better streaming system. He emphasizes the importance of understanding latency and performance in engineering systems, the market position of Redpanda in relation to Kafka, and the complexities involved in optimizing codebases for better performance. Alex also walks through Redpanda's architecture, focusing on its thread architecture, memory allocation mechanics, and the importance of protocol correctness. He highlights how Redpanda stands out in the data systems landscape by eliminating unnecessary complexities and optimizing performance across various latency spectrums. The discussion also touches on the future of data processing, emphasizing the shift towards agentic workloads and the integration of analytical and operational layers.

Chapters:
00:00 Introduction
11:07 Building a Better Streaming System
19:10 Market Position and Competition
25:06 Optimizing Latency and Performance
32:38 Understanding Complexity in Codebases
33:36 Thread Architecture and Concurrency Models
39:39 Memory Allocation Mechanics
47:31 Protocol Correctness and Optimization Strategies
56:27 Redpanda's Unique Position in Data Systems
01:02:05 The Future of Data Processing and Agentic Workloads

Blogs:
TPC buffers: https://www.redpanda.com/blog/tpc-buffers
https://www.redpanda.com/blog/always-on-production-memory-profiling-seastar
https://www.redpanda.com/blog/end-to-end-data-pipelines-types-benefits-and-process

#streaming #kafka #redpanda #c++ #databasesystems #SQL #distributedsystems #memoryallocation #garbagecollection

The GeekNarrator
eBPF and continuous profiling with Frederic

Mar 14, 2025 · 77:46


About this episode:
In this episode, Kaivalya Apte and Frederic Branczyk talk about observability, focusing on continuous profiling and the role of eBPF. They discuss the evolution of profiling techniques, the importance of systematic data collection, and the challenges of maintaining low overhead while gathering detailed performance metrics. Frederic shares insights from his extensive experience with Prometheus and Kubernetes, emphasizing the transformative impact of continuous profiling on software performance optimization. The conversation delves into the intricacies of eBPF (Extended Berkeley Packet Filter) and its applications in profiling and performance analysis, covering the capabilities of eBPF in extending the kernel safely, the mechanisms of user-space profiling, and the handling of process terminations. It also explores memory and network profiling techniques, the challenges of profiling in different programming environments, and the limitations of eBPF in certain use cases. The conversation concludes with valuable resources for those interested in learning more about eBPF and profiling techniques.

Chapters:
00:00 Introduction to Observability and Profiling
01:17 Frederic's Background and Expertise
02:11 The Importance of Continuous Profiling
06:46 The Value of Continuous Profiling
11:20 Understanding Profiling Data
19:09 Data Structures and Performance in Profiling
32:35 The Role of eBPF in Profiling
42:48 Introduction to eBPF and Its Capabilities
48:32 User Space Profiling and Memory Management
51:39 Handling Process Termination and Agent Recovery
55:27 Memory and Network Profiling Techniques
01:01:33 Profiling in Different Programming Environments
01:11:47 Use Cases and Limitations of eBPF in Profiling
01:13:54 Resources for Learning eBPF and Profiling Techniques

Point-Free Videos
SQL Builders: Selects

Mar 10, 2025 · 44:49


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- We begin to build a type-safe SQL query builder from scratch by familiarizing ourselves with the `SELECT` statement. We will explore the SQLite documentation to understand the syntax, introduce a type that can generate valid statements, and write powerful inline snapshot tests for their output.
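Point-Free builds its query builder in Swift, but the underlying idea carries to any language. A rough Python sketch (illustrative, not Point-Free's API): model a statement as an immutable value that renders valid SQL, then pin the rendered output with snapshot-style assertions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Select:
    """An immutable description of a SELECT statement."""
    table: str
    columns: tuple = ("*",)

    def select(self, *columns):
        # Return a new value rather than mutating, like a Swift struct.
        return Select(self.table, columns)

    def sql(self):
        return f'SELECT {", ".join(self.columns)} FROM "{self.table}"'

query = Select("reminders").select("id", "title")
# Inline snapshot-style test: the generated SQL is pinned exactly.
assert query.sql() == 'SELECT id, title FROM "reminders"'
print(query.sql())
```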

Point-Free Videos

Every once in a while we release a new episode free for all to see, and today is that day! Please enjoy this episode, and if you find it interesting you may want to consider a subscription: https://www.pointfree.co/pricing. --- Last week we released SharingGRDB, an alternative to SwiftData powered by SQLite, but there are a few improvements we could make. Let's take a look at some problems with the current tools before giving a sneak peek at the solution: a powerful new query building library, leveraging many advanced Swift features, that we will soon build from scratch.

AWS Bites
140. DuckDB Meets AWS: A Match Made in Cloud

Feb 21, 2025 · 17:38


In this episode, we explore DuckDB, an open-source analytical database known for its speed and simplicity. Discover how DuckDB stands out in various applications and compare it to other tools like SQLite, Athena, Pandas, and Polars. We also demonstrate integrating DuckDB with AWS Lambda and Step Functions for serverless analytics.

AWS Bites is brought to you by fourTheorem. If you are looking for a partner to architect, develop and modernise on AWS, give fourTheorem a call. Check out fourtheorem.com

In this episode, we mentioned the following resources:
Our `duck-query-lambda`, a Lambda runtime for DuckDB queries: https://github.com/fourTheorem/duck-query-lambda
DuckDB's official website: https://duckdb.org/
LibSQL: https://github.com/tursodatabase/libsql

Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on X/Twitter, BlueSky or LinkedIn:
https://twitter.com/eoins | https://bsky.app/profile/eoin.sh | https://www.linkedin.com/in/eoins/
https://twitter.com/loige | https://bsky.app/profile/loige.co | https://www.linkedin.com/in/lucianomammino/
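For a feel of why DuckDB suits Lambda-style serverless analytics, here is a minimal sketch using DuckDB's Python API; the episode's own demo uses the `duck-query-lambda` runtime linked above, so treat this standalone snippet as illustrative.

```python
import duckdb  # pip install duckdb

con = duckdb.connect()  # in-memory database, much like SQLite's ":memory:"

# DuckDB can also query files directly (read_csv_auto, read_parquet, even
# s3:// paths); here we aggregate a generated table so the snippet is
# self-contained.
rows = con.execute("""
    SELECT range % 3 AS bucket, count(*) AS n
    FROM range(1000)
    GROUP BY bucket
    ORDER BY n DESC
""").fetchall()
print(rows)
```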

The GeekNarrator
Patterns of Distributed Systems with Unmesh Joshi

Feb 12, 2025 · 58:14


About this episode:
In this conversation, Unmesh Joshi discusses the patterns of distributed systems. He emphasizes the importance of understanding the context in which patterns are applied, the need to read code to grasp their implementation, and the common pitfalls developers face when applying patterns without a clear understanding of the underlying problems.

Chapters:
00:00 Introduction to Distributed Systems and Patterns
05:39 Understanding Patterns in Distributed Systems
19:23 Bridging Theory and Practice in Distributed Systems
28:56 The Role of Developers in Understanding Patterns
31:58 Understanding Patterns in Software Development
40:58 The Human Aspect of Software Design
44:37 Iterative Development and Real-World Applications
49:03 The Future of Patterns in Cloud-Native Systems
55:07 Common Misunderstandings of Distributed Patterns

Interesting quotes:
"Patterns capture wisdom of generations."
"Reading code is the best way to understand."
"Patterns help you see beyond abstractions."
"Understanding patterns helps bridge the gap."
"Expert generalists can operate across verticals."
"There are no simple systems in the cloud era."
"Patterns can add complexity if misunderstood."
"Patterns are always useful within a context."
"Design and development are human activities."
"The deconstruction of databases is happening."
"Paxos is the most misunderstood pattern."

Unmesh Joshi: https://in.linkedin.com/in/unmesh-joshi-9487635
Catalog of Patterns: https://martinfowler.com/articles/patterns-of-distributed-systems/

I hope you liked the episode; if you did, please like, share and subscribe.

#distributedsystems #patterns #softwarearchitecture #consensus #algorithms #coding #softwaredevelopment #ThoughtWorks #softwareengineering #cloud #computing #software

LINUX Unplugged
601: Taming the Demons

Feb 10, 2025 · 68:42 (transcript available)


It's week one of our FreeBSD challenge, and for one of us, that penalty Windows install looks uncomfortably close! Plus, Zach Mitchell joins us to update us on Planet Nix.

Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default. Get it free on up to 100 devices!
1Password Extended Access Management: a device trust solution for companies with Okta, ensuring that if a device isn't trusted and secure, it can't log into your cloud apps.

Support LINUX Unplugged

Remote Ruby
High Leverage Rails & SQLite with Stephen Margheim

Feb 7, 2025 · 51:26


In this episode, Chris and Andrew welcome guest Stephen Margheim to discuss his specialization in Ruby and SQLite. Stephen shares his journey of improving the developer experience with SQLite by addressing various pain points and adapting it for production in the Rails ecosystem. He talks about his contributions to Rails 8, making it the first fully production-ready, SQLite-compatible web application framework. The conversation also covers the importance of leveraging these tools to build high-quality applications quickly and efficiently. Stephen also announces his upcoming course, "High Leverage Rails," which focuses on maximizing the potential of Rails and SQLite for rapid, reliable development. Hit download now to hear more!

Links:
Stephen Margheim X
Stephen Margheim Website
Stephen Margheim LinkedIn
High Leverage Rails (video course by Stephen Margheim)
High Performance SQLite
Hatchbox
Tropical on Rails, April 3 & 4, 2025, São Paulo, Brazil
Sin City Ruby, April 10 & 11, 2025, Las Vegas, NV
Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Jason Charnes X/Twitter | Chris Oliver X/Twitter | Andrew Mason X/Twitter

Code and the Coding Coders who Code it
Episode 45 - Stephen Margheim

Feb 4, 2025 · 38:46 (transcript available)


Stephen Margheim, a celebrated figure in the Ruby and Rails community, returns to unravel the fascinating intricacies of his latest project: writing a parser for SQLite's SQL dialect in Ruby. He shares his enlightening journey of translating complex SQL syntax, which at first seemed a simple endeavor but soon unfolded into a realm of deep learning and unexpected challenges. Alongside this, Stephen collaborates with Aaron Francis on "High Leverage Rails," a video course designed to spotlight the synergy between Rails and SQLite, offering a treasure trove of insights into developing high-quality applications.

We dive into the nuanced world of SQL parsing, where Stephen candidly recounts the arduous process of porting SQLite's lexer and parser into Ruby. What began as a straightforward task quickly turned into a labyrinth of complex syntax and discrepancies that required astute attention and incremental progress. He reflects on the absence of a fully compatible SQLite parser in any language, emphasizing the significance of open parsers like Postgres's in creating a robust ecosystem for tools and libraries.

Stephen's excitement is palpable as he discusses Quickdraw, a groundbreaking testing framework that revolutionizes testing in multi-core environments. This innovation, along with the anticipation for RailsConf 2025 in Philadelphia, paints a bright future for the Rails community. With rich discussions on parsing, testing, and upcoming Rails events, this episode promises to inspire and engage both seasoned developers and newcomers to the Ruby and Rails landscape. Join us for an episode filled with excitement, insight, and a glimpse into the future of Rails development.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Point-Free Videos
Sharing with SQLite: Dynamic Queries

Feb 3, 2025 · 39:03


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- We are now driving several features using SQLite using a simple property wrapper that offers the same ergonomics as Swift Data's `@Query` macro, and automatically keeps the view in sync with the database. Let's add one more feature to leverage _dynamic_ queries by allowing the user to change how the data is sorted.
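The episode's tools are Swift, but the core trick behind dynamic queries translates anywhere: the sort column is chosen at runtime, and because ORDER BY identifiers cannot be passed as bound parameters, it is validated against an allow-list before being interpolated. A Python sqlite3 sketch with illustrative table and column names:

```python
import sqlite3

ALLOWED_SORTS = {"title", "dueDate", "priority"}  # fixed set of identifiers

def fetch_reminders(con, sort_by="dueDate"):
    if sort_by not in ALLOWED_SORTS:
        raise ValueError(f"unsupported sort column: {sort_by}")
    # Safe to interpolate: sort_by comes from a known-good allow-list.
    return con.execute(f"SELECT title FROM reminders ORDER BY {sort_by}").fetchall()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reminders (title TEXT, dueDate TEXT, priority INTEGER)")
con.executemany("INSERT INTO reminders VALUES (?, ?, ?)", [
    ("Ship episode", "2025-02-03", 1),
    ("Write tests", "2025-02-01", 2),
])
print(fetch_reminders(con, "title"))
```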

The Changelog
Turso is rewriting SQLite in Rust (Interview)

Jan 30, 2025 · 75:56


Glauber Costa, co-founder and CEO of Turso, joins us to discuss libSQL, Limbo, and how they're rewriting SQLite in Rust. We discuss their efforts with libSQL, the challenge of SQLite being in the public domain but not being open for contribution, their choice to rewrite everything with Limbo, how this all plays into the future of the Turso platform, how they test Limbo with Deterministic Simulation Testing (DST), and their plan to replace SQLite.
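The Deterministic Simulation Testing mentioned here has a simple core idea: route every source of nondeterminism (scheduling, faults, input) through one seeded PRNG, so any failing run can be replayed exactly from its seed. A toy Python model of that idea, not Limbo's actual Rust harness:

```python
import random

def simulate(seed, steps=500):
    """Toy database workload in which every random choice comes from one PRNG."""
    rng = random.Random(seed)
    committed, log = {}, []
    for step in range(steps):
        op = rng.choice(["write", "crash"])
        if op == "write":
            key = rng.randrange(10)
            committed[key] = step
            log.append((key, step))
        else:
            # Simulated crash + recovery: replaying the log must rebuild state.
            recovered = {}
            for key, value in log:
                recovered[key] = value
            assert recovered == committed, f"divergence at step {step}, seed {seed}"
    return committed

# Same seed, same history: any bug found is reproducible from its seed.
assert simulate(42) == simulate(42)
print("seed 42 replayed identically")
```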

Point-Free Videos
Sharing with SQLite: Advanced Queries

Jan 27, 2025 · 27:55


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- Let's leverage our new `@Shared` SQLite strategy by adding a brand new feature: archiving. We will see how easy it is to incorporate queries directly into a SwiftUI view, and we will expand our tools to support even more kinds of queries.

Digital Forensics Now
Mind Matters: Navigating DFIR with Balance

Jan 24, 2025 · 63:25 (transcript available)


Send us a text

Get ready for a hands-on look at digital forensics and the challenges professionals tackle every day. We share a story about forensic guessing that highlights the importance of testing assumptions and following the evidence to avoid errors. The discussion emphasizes how staying grounded in facts can prevent investigations from going off track.

We also highlight advancements in forensic tools and training. Learn about tools like Belkasoft, the UFADE tool for iOS device extraction, and SQBite for SQLite database analysis. These tools are improving efficiency and accessibility in the field. But it's not all about the tech. We address the important topic of mental health in digital forensics. We discuss the pressures of the job, strategies for managing stress, and the importance of supporting one another. Personal experiences and practical tips highlight the need to prioritize mental well-being in this demanding field.

This episode provides valuable information on tools, investigative approaches, and mental health strategies for forensic professionals.

Notes:
Belkasoft Windows Forensics Course: https://belkasoft.com/windows-forensics-training
Updates to UFADE: https://github.com/prosch88/UFADE/releases
The Duck Hunter's Blog:
https://digital4n6withdamien.blogspot.com/2025/01/the-duck-hunters-guide-blog-1.html
https://digital4n6withdamien.blogspot.com/2025/01/the-duck-hunters-guide-blog-2.html
https://digital4n6withdamien.blogspot.com/2025/01/the-duck-hunters-guide-blog-3.html
SQBite:
https://digital4n6withdamien.blogspot.com/2025/01/introducing-sqbite-alpha-python-tool.html
https://github.com/SpyderForensics/SQLite_Forensics/tree/main/SQBite
Mental Health in DFIR:
https://thebinaryhick.blog/2019/06/21/mental-health-in-dfir-its-kind-of-a-big-deal/
https://www.forensicfocus.com/podcast/the-impact-of-traumatic-material-on-dfir-well-being/
https://www.forensicfocus.com/news/dfir-and-mental-health-are-we-doing-enough-to-protect-investigators/
https://www.sciencedirect.com/science/article/pii/S2666281721000251
https://belkasoft.com/preventing-burnout-in-digital-forensics
https://www.magnetforensics.com/resources/taking-care-of-mental-health-during-digital-forensics-investigations/
https://www.harmlessthepodcast.com/
https://www.shiftwellness.org/about-us
https://www.nyleap.org/
What's New with the LEAPPS: https://github.com/abrignoni
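SQBite (linked above) is a purpose-built forensic parser for SQLite; as a minimal illustration of where that kind of analysis begins, this sketch enumerates a database's schema from SQLite's built-in sqlite_master catalog (the `evidence.db` filename is a placeholder):

```python
import sqlite3

con = sqlite3.connect("evidence.db")  # placeholder path to an acquired database
for name, sql in con.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table' ORDER BY name"
):
    print(name)
    print(f"  {sql}")
```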

The GeekNarrator
AWS Aurora Distributed SQL internals with Marc Brooker

Jan 24, 2025 · 74:55


The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------ In this episode of the Geek Narrator podcast, host Kaivalya Apte interviews Marc Brooker, a distinguished engineer at AWS, about Aurora D-SQL. They discuss Marc's journey at AWS, the evolution of Aurora D-SQL, and the customer-centric approach that led to its development. Marc explains the choice of PostgreSQL as the foundation for DSQL, the architecture of the database, and the importance of snapshot isolation and concurrency control. The conversation goes into the technical aspects of DSQL, including the write process and how atomicity is maintained, providing listeners with a comprehensive understanding of this innovative database solution. This conversation also goes deep into the intricacies of database design, focusing on fault tolerance, replication strategies, and the role of Firecracker VMs in enhancing scalability. Marc Brooker discusses the architecture of Aurora D-SQL, emphasizing the importance of transaction management, the challenges of active-active deployments, and the trade-offs involved in database design. The discussion also highlights various use cases for Aurora DSQL, including its suitability for micro-services and serverless architectures, while addressing scenarios where it may not be the best fit. Chapters 00:00 Introduction to Aurora DSQL and Marc Brooker's Journey 03:38 The Evolution of Aurora DSQL at AWS 09:24 Customer-Centric Development and Technological Enablers 12:50 Why PostgreSQL? The Choice Behind DSQL 16:39 High-Level Architecture of DSQL 22:07 Understanding Snapshot Isolation and Concurrency Control 28:45 The Write Process and Atomicity in DSQL 38:50 Designing Fault Tolerance in Databases 47:38 Replication and Transaction Commit Strategies 54:35 Active-Active Deployment and Fault Tolerance 01:00:14 Role of Firecracker VM in Scalability 01:09:27 Use Cases and Trade-offs of Aurora D-SQL Marc's Blog: https://brooker.co.za/blog/ Marc on Aurora DSQL : https://brooker.co.za/blog/2024/12/03/aurora-dsql.html AWS's documentation on Aurora DSQL : https://aws.amazon.com/rds/aurora/dsql/features/ ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Link to other playlists. 
LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #sql #postgres #databasesystems #aws #awsdevelopers #spanner #google #cockroachdb #yugabytedb #cap #scalability #WAL #DistributedSystems #Cloud #aurora

The GeekNarrator
Power of #Duckdb with Postgres: pg_duckdb

The GeekNarrator

Play Episode Listen Later Jan 22, 2025 60:19


The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Hey folks - In this episode we have Jelte with us, who is the main contributor to the pg_duckdb project, which is a Postgres extension that adds the #duckdb power to our beloved #postgresql. We try to understand how it works, why it is needed, and what the future of pg_duckdb looks like. If you love #Postgres or #Duckdb or just understanding #database internals, then this episode will give you pretty solid insights into Postgres query processing, DuckDB analytics, the Postgres extension ecosystem and so on. Basics: pg_duckdb is a Postgres extension that embeds DuckDB's columnar-vectorized analytics engine and features into Postgres. We recommend using pg_duckdb to build high performance analytics and data-intensive applications. (A minimal usage sketch appears at the end of this entry.) Chapters: 00:00 Introduction to pg_duckdb 03:40 Understanding the Integration of DuckDB with Postgres 06:23 Architecture of pg_duckdb: Query Processing Explained 10:02 Configuring DuckDB for Analytics Queries 15:37 Managing Workloads: Transactional vs. Analytical 21:02 Observability and Debugging in DuckDB 25:58 Data Deletion and GDPR Compliance 30:46 Schema Management and Migration Challenges 33:14 Managing Schema Changes in Databases 35:21 Upgrading Database Extensions 36:33 Enhancing Data Reading Methods 38:33 Future Features and Improvements 45:54 Use Cases for pg_duckdb 50:03 Challenges in Building the Extension 55:25 Getting Involved with pg_duckdb Important links: The DuckDB Discord server, which has a pg_duckdb channel inside it: https://discord.duckdb.org/ repo: https://github.com/duckdb/pg_duckdb good-first-issue issues: https://github.com/duckdb/pg_duckdb/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22 ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Link to other playlists. LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. 
Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #sql #postgres #databasesystems
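For readers who want to try what Jelte describes, here is a minimal sketch of enabling pg_duckdb, based on the project README at the time of writing; the database name, table, and query are hypothetical, and duckdb.force_execution is the setting the README documents for routing a query through DuckDB:

    # Enable the extension in an existing Postgres database
    # (assumes the pg_duckdb extension is already installed on the server).
    psql -d analytics -c "CREATE EXTENSION pg_duckdb;"

    # Run an analytics query through DuckDB's vectorized engine
    # instead of the regular Postgres executor.
    psql -d analytics -c "SET duckdb.force_execution = true;
                          SELECT category, count(*) FROM events GROUP BY category;"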

Point-Free Videos
Sharing with SQLite: The Solution

Point-Free Videos

Play Episode Listen Later Jan 20, 2025 45:47


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- SQLite offers a lot of power and flexibility over a simple JSON file, but it also requires a lot of boilerplate to get working. But we can hide away all that boilerplate using the `@Shared` property wrapper and end up with something that is arguably nicer than SwiftData's `@Query` macro!

Point-Free Videos
Sharing with SQLite: The Problems

Point-Free Videos

Play Episode Listen Later Jan 13, 2025 45:59


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- Persisting app state to user defaults or a JSON file is simple and convenient, but it starts to break down when you need to present this data in more complex ways, and this is where SQLite really shines. Let's get a handle on the problem with some state that is currently persisted to a JSON file, and let's see how SQLite fixes it.

Late Night Linux Extra
Linux Dev Time – Episode 115

Late Night Linux Extra

Play Episode Listen Later Jan 12, 2025 22:18


We dig into SQLite – an interesting and unusual project that is widely used but has an uncommon licence, a proprietary test suite, and doesn't take external contributions. Plus printf() vs “proper” debugging. Support us on Patreon and get an ad-free RSS feed with early episodes sometimes. See our...

Late Night Linux All Episodes
Linux Dev Time – Episode 115

Late Night Linux All Episodes

Play Episode Listen Later Jan 12, 2025 22:18


We dig into SQLite – an interesting and unusual project that is widely used but has an uncommon licence, a proprietary test suite, and doesn't take external contributions. Plus printf() vs “proper” debugging. Support us on Patreon and get an ad-free RSS feed with early episodes sometimes. See our...

Modern Web
Why SQLite is Perfect for the Web

Modern Web

Play Episode Listen Later Jan 7, 2025 34:58


Dive deep into the fascinating world of SQLite and Turso with Glauber Costa, the founder and CEO of Turso, as he shares insights into the evolution of modern database technologies. Hosted by Danny Thompson and Adam Rakus on the Modern Web Podcast, this episode unpacks SQLite's growing popularity, Turso's innovative managed database services, and how local-first architectures are changing the database landscape. From syncing databases to leveraging SQLite for offline use, discover how these advancements empower developers to build faster, scalable, and cost-effective solutions. Tune in to learn about Turso's unique approach, real-world use cases, and the future of databases in edge computing and mobile applications. Topics Discussed: - SQLite's resurgence and why it's trending in modern architectures - Turso's fork of SQLite and its innovative features - Offline-capable databases and local-first architecture - The impact of NoSQL databases and why SQL is making a comeback - Practical examples and use cases of database syncing and encryption Follow Glauber Costa on Social Media Twitter: https://x.com/glcst LinkedIn: /glommer Github: https://github.com/glommer Turso: https://turso.tech/ Sponsored by This Dot: thisdot.co

Infinitum
Mihajlo Bombić

Infinitum

Play Episode Listen Later Jan 4, 2025 61:26


Ep 250
Bootable backups have been deprecated for several years
Collection of insane and fun facts about SQLite - blag
DB Browser for SQLite
Jeff Johnson: I just discovered that Enhanced Visual Search was enabled by default on my iPhone in Photos Settings.
Brian Roemmele: 42 years ago this bakery plugged in their Commodore 64s to use as cash registers at Hilligoss Bakery in Brownsburg, Indiana, and they are still in use.
M4 Mac mini cases: ORICO dock / case; ZEERA MacForge Gen2: CNC Aluminum Cooling Case for 2024 Mac Mini M4 & M4 Pro with Mac Pro Enclosure Design; “Mac mini Pro” 3D Printed Enclosure — Test fit at Apple Store
Acknowledgements: Recorded 4 January 2025. Intro music by Vladimir Tošić; the old site is here. Logo by Aleksandra Ilić. Episode artwork by Saša Montiljo (pastel on paper); his corner on DeviantArt.

The GeekNarrator
Database Trends and More with Peter Zaitsev

The GeekNarrator

Play Episode Listen Later Jan 4, 2025 64:13


Deep Dive into Databases with Peter Zaitsev | The GeekNarrator Podcast Join host Kaivalya Apte and special guest Peter Zaitsev from Percona on this episode of The GeekNarrator podcast. They discuss Peter's fascinating journey into the world of databases, founding Percona, and the evolution of open source database solutions. Topics include the rise of PostgreSQL, the comparison between MySQL and PostgreSQL, database observability, the impact of cloud and Kubernetes on database management, licensing changes in popular databases like Redis, and career advice for database administrators and developers. Stay tuned for insights on the future of databases, observability strategies, and the role of AI in database management. 00:00 Introduction and Guest Welcome 00:14 Peter's Journey into Databases 04:15 The Rise of PostgreSQL vs MySQL 18:17 Challenges in Managing Database Clusters 24:36 Common Developer Mistakes with Databases 30:59 MongoDB's Success and Future 34:53 Redis and Licensing Changes 37:07 Elastic's License Change and Its Impact 38:25 Redis Fork and Industry Collaboration 40:27 Kubernetes and Cloud-Native Databases 47:47 Challenges in Database Upgrades and Migrations 54:58 Load Testing and Observability 01:09:02 Future of Database Administration and Development 01:15:13 Conclusion and Final Thoughts Become a member of The GeekNarrator to get access to member only videos, notes and monthly 1:1 with me. Like building stuff? Try out CodeCrafters and build amazing real world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning!

The GeekNarrator
DBOS internals - Build reliable backends 10x faster

The GeekNarrator

Play Episode Listen Later Jan 4, 2025 61:58


The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------ In this episode we are talking to Peter and Qian, co-founders of DBOS. The conversation covers the challenges of creating fault-tolerant applications, the architecture of DBOS, and how it addresses reliability at multiple layers. Chapters: 00:00 Introduction to The GeekNarrator Podcast 00:29 Meet the Co-Founders of DBOS 01:25 The Core Problem: Building Reliable Systems 02:05 How DBOS Solves Reliability Issues 04:29 Understanding DBOS Architecture 06:09 Deep Dive into the DBOS Library 08:36 Postgres and State Management 18:31 Handling Parallel Steps and Performance Concerns 26:00 Observability and Version Control 30:18 Running Multiple Code Versions 30:58 Managing Workflow Versions 32:03 Surgery on Workflow States 33:15 Library Annotations and Durable Execution 34:24 Migrating to the Cloud Version 37:23 Handling Email Workflows 42:41 Transactional Guarantees with Postgres 48:44 Technical Challenges and Multi-Tenancy 54:12 Real-World Use Cases and Benefits 59:45 Conclusion and Final Thoughts Some important links: - Main website: https://www.dbos.dev/ - DBOS docs: https://docs.dbos.dev/ - Open-source DBOS Transact libraries: - Python: https://github.com/dbos-inc/dbos-transact-py - TypeScript: https://github.com/dbos-inc/dbos-transact-ts ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Link to other playlists. LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning!

Sustain
Episode 261: Alexander Petros on htmx and sustainable, simpler tools

Sustain

Play Episode Listen Later Dec 20, 2024 36:21


Guest Alexander Petros Panelist Richard Littauer Show Notes Join host Richard Littauer as he dives into the world of open source sustainability with Alexander Petros, core maintainer of htmx and freelance software engineer. Today, they explore the evolution of HTML, the power of lightweight web protocols, and the broader implications of open-source software for the future of the web. Alexander shares his insights on building sustainable digital infrastructure, using simple tools effectively, and rethinking web development paradigms. Hit download now! [00:01:40] Alexander explains htmx as a lightweight front-end JavaScript library enhancing HTML capabilities. [00:03:16] There's a discussion about HTML's design for behavior and interactivity and a comparison of traditional HTML with modern practices, including JavaScript-heavy frameworks. [00:05:50] We hear the origins of htmx, how it started as a jQuery extension called intercooler.js, and the evolution during the pandemic to a standalone library. [00:09:16] Alexander explains building for the long term, why lightweight, adaptable systems matter, and reflects on the durability of early web standards and tools. [00:12:17] Richard inquires about what Alexander envisions a hundred years from now with htmx. [00:14:57] Balancing simplicity and scalability is discussed, covering HTML's capabilities for large-scale applications and why many developers overcomplicate solutions unnecessarily. [00:17:40] Alexander critiques over-reliance on tools like Docker and large-scale build systems and advocates for simpler development environments like SQLite. [00:19:42] Alexander talks about why open source frameworks like React solve organizational problems for tech giants. [00:25:42] Richard tells us he's been spending time on the International Code of Zoological Nomenclature as a foundational system for species classification and Alexander speaks about the challenges of contributing to protocols governed by large corporations and why HTML remains a uniquely sustainable and universal platform. [00:28:22] Richard asks Alexander if he's thought about the 1000 year approach to the work he's doing. [00:32:21] Find out where you can follow Alexander and his blog online. Quotes [00:13:11] “The web is going to be the most effective delivery mechanism for software for the next couple of decades.” [00:14:12] “If we look at the tools that we have available today, which tools can we use that are most likely to get us to that fifty, hundred year useful piece of software?” [00:24:06] “Different structural project models produce very different software.” Spotlight [00:33:11] Richard's spotlight is the International Code of Zoological Nomenclature. [00:34:07] Alexander's spotlight is better-sqlite3. 
Links SustainOSS (https://sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) richard@sustainoss.org (mailto:richard@sustainoss.org) SustainOSS Discourse (https://discourse.sustainoss.org/) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials) Alexander Petros Website (https://alexanderpetros.com/) Alexander Petros LinkedIn (https://www.linkedin.com/in/apetros/) Unplanned Obsolescence (Alexander's Blog) (https://unplannedobsolescence.com/) Building the Hundred-Year Web Service with htmx- Alexander Petros (YouTube) (https://www.youtube.com/watch?v=lASLZ9TgXyc) htmx (https://htmx.org/) Sustain Podcast-Episode 238: Julia Evans and Wizard Zines (https://podcast.sustainoss.org/238) xkcd-927: How Standards Proliferate (https://xkcd.com/927/) Julia Evans Blog (https://jvns.ca/) International Code of Zoological Nomenclature (ICZN) (https://www.iczn.org/) better-sqlite3 (https://github.com/WiseLibs/better-sqlite3) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Alexander Petros.

Syntax - Tasty Web Development Treats
861: Local Data: Sqlite, LocalStorage, Session, Cookies and IndexDB

Syntax - Tasty Web Development Treats

Play Episode Listen Later Dec 16, 2024 24:58


Scott and Wes dive into the world of local data storage, breaking down the pros and cons of SQLite, LocalStorage, SessionStorage, Cookies, and IndexedDB. They cover real-world use cases like user settings, offline data, and auth tokens, while sharing their favorite tools and strategies for keeping your data fast and secure. Show Notes 00:00 Welcome to Syntax! 00:30 Brought to you by Sentry.io. 01:43 Why store data locally. 01:55 User preferences and settings. 02:50 Not logged in state (shopping carts, etc). 03:30 Data for faster loading. 03:51 Privacy concerns. 04:25 Large files or drafts. 05:50 Auth tokens. 07:08 Where to store data. 07:11 Cookies. 07:48 Local storage. 09:15 Session storage. 10:35 IndexedDB. 12:15 BYOJS Storage. 13:41 SQLite via WASM. 14:12 Penalties of SQLite in browser via WASM. 15:29 PGLite. 16:23 Dealing with migrations. 16:55 The advantages of the approach. 18:42 Dexie. 19:59 Patch messages. 21:25 A few options. TinyBase Docs. Local First Web. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads

All TWiT.tv Shows (MP3)
Untitled Linux Show 182: Sketchy Patches

All TWiT.tv Shows (MP3)

Play Episode Listen Later Dec 15, 2024 68:38 Transcription Available


It's the Jeff and Jonathan Show, with the guys talking about the new Intel Arc Battlemage graphics cards, the new hardware attack known as BadRAM, and the new proposed Fedora COSMIC spin. There's a new Proton release, CentOS Stream 10 is out, and KDE ships 6.3. For tips we have Cargo for all your Rust packaging needs, and switcherooctl for video card management. See the show notes at https://bit.ly/4iDntUk and enjoy! Host: Jonathan Bennett Co-Host: Jeff Massie Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Hacker Public Radio
HPR4251: Dave and MrX turn over a new leaf

Hacker Public Radio

Play Episode Listen Later Nov 18, 2024


This show has been flagged as Explicit by the host.

Introduction

Hosts: MrX, Dave Morriss

We recorded this on Saturday September 14th 2024. This time we were at Swanston Farm, a place we had previously visited for lunch in March 2024. After lunch we adjourned to Dave's car (Studio N) in the car park, and recorded a chat. The details of why it is Studio N instead of Studio C are mentioned in the chat itself! Preparing this show has taken longer than usual this time - apologies!

Topics discussed

Studio change: Sadly, since the last recording Studio C (Dave's 10-year-old Citroën C4 Picasso) self-destructed. It was a diesel car and one of the fuel injectors failed and destroyed the engine management system as it died. It wasn't worth repairing! The replacement is Studio N, a Nissan Leaf, which is an EV (electric vehicle). The price of nearly new EV cars is fairly good in the UK at this time in 2024, so it seemed like a good opportunity to get one. Learning to own and drive an EV can be challenging to some extent:
- "Range anxiety" and access to charging stations
- Regenerative braking
- Fast (DC) charging on the road is relatively expensive (£0.79 per kWh), but is convenient
- Ideally, a home (AC) charger is required. It will be slower (7 kW) but will be cheaper with a night tariff (£0.085 per kWh versus £0.25 per kWh normal rate)
- There is potential, with solar panels and a battery, to use free electricity to charge an EV at home
- MrX might like to move to an EV in the future

YouTube channels: Dave is subscribed to a channel called "The Post Apocalyptic Inventor (TPAI)" and recently shared one of the latest videos with MrX. The channel owner collects discarded items from scrapyards in Germany, or buys old bits of equipment, and gets them working again.
- Milling Machine Adventure! Bring her Home! / Gantry Build
- I built a CNC Plasma Cutting Table from Scrap!

Databases: MrX used dBase on DOS in the past, and received some training in databases. In 2017 he obtained a large CSV (comma-separated values) file from the OFCOM (Office of Communications, UK) website containing their Wireless Legacy Register, which contains licensees and frequencies with longitude and latitude values. A means of interrogating this file was sought, having found that spreadsheets were not really very good at handling files of this size (around 200,000 records). MrX used the xsv tool, which was covered in shows hpr2698 and hpr2752 by Mr. Young. It allows a CSV file to be interrogated in quite a lot of detail from the command line. However, with a file of this size it was still quite slow. In a discussion with Dave the subject of the SQLite database came up. Using the SQLite Browser it was simple to load this CSV file into a database and gain rapid access to its contents. SQLite databases may also be queried through a command-line interface, which can also be run on a Raspberry Pi, phones, tablets and on a ChromeBook. (A sketch of this workflow appears below.)

The textimg tool: This is a command to convert from colored text (ANSI or 256) to an image. Dave generates coloured text from his meal database (HPR show hpr3386 :: What's for dinner?, this being a later enhancement), then captures the output and sends it to a Telegram channel shared with his family. Dave also exchanges weather data obtained from the site wttr.in with Archer72 on Matrix. This is a useful tool for generating images from text, including any text colours. It can be installed from the GitHub copy, and maybe from some package repositories. 
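A minimal sketch of the CSV-to-SQLite workflow MrX describes, using only the stock sqlite3 shell; the database, file, and table names are invented for illustration, not the actual OFCOM ones. The last line shows the kind of textimg invocation Dave mentions (output flag as per the textimg README):

    # Import the CSV; with a new table, the first row supplies the column names.
    sqlite3 wlr.db <<'EOF'
    .mode csv
    .import wireless_legacy_register.csv licences
    EOF

    # Interrogate ~200,000 records far faster than a spreadsheet can.
    sqlite3 wlr.db 'SELECT COUNT(*) FROM licences;'

    # Render ANSI-coloured terminal output to an image with textimg.
    printf '\033[31mHello in red\033[0m\n' | textimg -o hello.png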
Using coloured text in BASH (Dave responding to MrX): I have used a function to define variables with colour names: call a function define_colours which defines (and exports) variables called red, green, etc., using red=$(tput setaf 1); export red. I use the colours in two ways:
- Method 1: use these names directly, as in echo "${red}Red text${reset}"
- Method 2: use another function coloured which takes two arguments, a colour name (as a string) and a message. The script encloses the message argument in a colour variable and a reset. The colour name argument is used in an indirection to turn red into the contents of the variable $red.
This probably needs a show to explain things fully; a minimal sketch of both methods appears at the end of these notes.

Terminal multiplexers: Dave and MrX use GNU screen. Both recognise that the alternative tmux might be better to use in terms of features, but are reluctant to learn a new interface! Dave has noticed a new open-source alternative called zellij but has not yet used it.

Variable weather: Dealing with hot weather: YouTube, Techmoan channel, PERSONAL AIRCON - Ranvoo Aice Lite Review. MrX had recently had a holiday in the Lake District where the weather was good. In Scotland the weather has been wet and windy in the same period.

Spectrum24, OggCamp: MrX is attending his first OggCamp in Manchester. Dave will be attending too, as will Ken. HPR has a table/booth at OggCamp. Ken was recently at Spectrum24, an amateur radio conference in Paris. Meshtastic: an open source, off-grid, decentralized mesh network built to run on affordable, low-power devices.

Old inkjet printers: MrX has an Epson R300 printer where the black ink seems to have dried up. Dave has an old HP Inkjet with the same type of problem. This printer has a scanner and FAX capability. An HPR show was done in 2015 describing how it was set up to use a Raspberry Pi to make it available on the local network.

Propelling or mechanical pencils: Dave had a Pentel GraphGear 1000 propelling (aka mechanical) pencil which was mentioned on HPR show 3197. This was dropped onto concrete, and didn't appear damaged at the time, but it apparently received internal damage and eventually fell apart.

Links

Electric cars: EV (electric vehicle); Regenerative braking

Databases, SQLite: SQLite; SQLite Browser; An Easy Way to Master SQLite Fast; Open source SQLite Studio available for Linux; SQLiteStudio

SQL: Origins: The Birth of SQL & the Relational Database; Intricacies: MySQL JOIN Types Poster (Steve Stedman); Design: How to Fake a Database Design - Curtis Poe (Ovid)

The textimg tool: GitHub repository: textimg

zellij: Website: zellij; GitHub repository: zellij. Quote from the repo: Zellij is a workspace aimed at developers, ops-oriented people and anyone who loves the terminal. Similar programs are sometimes called "Terminal Multiplexers".

Provide feedback on this episode.
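A minimal sketch of the two methods described in the notes above; the tput calls and the Bash variable indirection are standard, but the function bodies are guesses rather than Dave's actual scripts:

    #!/usr/bin/env bash
    # Define (and export) one variable per colour, plus a reset.
    define_colours () {
        red=$(tput setaf 1);   export red
        green=$(tput setaf 2); export green
        reset=$(tput sgr0);    export reset
    }

    # Method 2: coloured COLOUR-NAME MESSAGE. The indirection ${!name}
    # turns the name "red" into the contents of the variable $red.
    coloured () {
        local name=$1 message=$2
        printf '%s%s%s\n' "${!name}" "$message" "$reset"
    }

    define_colours
    echo "${red}Red text${reset}"   # Method 1: use the variables directly
    coloured green 'Green text'     # Method 2: look the colour up by name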

Paul's Security Weekly
Tariffs, Pygmy Goat, Schneider, SQLite, Deepfakes, Military AI, Josh Marpet... - SWN #428

Paul's Security Weekly

Play Episode Listen Later Nov 6, 2024 32:53


Tariffs, Pygmy Goat, Schneider, SQLite and Dixie Flatline, Deepfakes, Military AI, Josh Marpet, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-428

Cyber Security Today
AI Finds Zero Day Vulnerability For First Time: Cyber Security Today for Wednesday, November 6, 2024

Cyber Security Today

Play Episode Listen Later Nov 6, 2024 8:05 Transcription Available


AI Finds Zero Day Vulnerability, MFA Mandatory on Google Cloud, French Energy Firm Hacked In today's episode of Cyber Security Today, host Jim Love discusses Google's AI-driven system Big Sleep discovering the first ever AI-identified zero day vulnerability in the SQLite database engine. He also covers Google's new requirement for Google Cloud users to implement multi-factor authentication (MFA) starting January, and a recent cyber-attack on French firm Schneider Electric, where hackers demanded a ransom in baguettes. Learn about these critical updates and their implications for the future of cybersecurity. 00:00 Introduction to Cyber Security Today 00:21 AI Discovers Zero Day Vulnerability 03:06 Google Cloud Enforces Multi-Factor Authentication 05:55 Hackers Demand Ransom in Baguettes 07:42 Conclusion and Show Notes

Paul's Security Weekly TV
Tariffs, Pygmy Goat, Schneider, SQLite, Deepfakes, Military AI, Josh Marpet... - SWN #428

Paul's Security Weekly TV

Play Episode Listen Later Nov 6, 2024 32:53


Tariffs, Pygmy Goat, Schneider, SQLite and Dixie Flatline, Deepfakes, Military AI, Josh Marpet, and more on the Security Weekly News. Show Notes: https://securityweekly.com/swn-428

The CyberWire
Confidence on election day.

The CyberWire

Play Episode Listen Later Nov 5, 2024 33:33


On election day U.S. officials express confidence. A Virginia company is charged with violating U.S. export restrictions on technology bound for Russia. Backing up your Gmail. Google mandates MFA. Google claims an AI-powered vulnerability detection breakthrough. Schneider Electric investigates a cyberattack on its internal project tracking platform. A Canadian man suspected in the Snowflake-related data breaches has been arrested. On our Threat Vector segment, David Moulton sits down with Christopher Scott from Unit 42 to explore the essentials of crisis leadership and management.  I spy air fry? Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. Threat Vector Segment In this segment of the Threat Vector podcast, host David Moulton sits down with Christopher Scott, Managing Partner at Unit 42 by Palo Alto Networks, to explore the essentials of crisis leadership and management in cybersecurity. You can hear the full discussion here and catch new episodes of Threat Vector every Thursday on your favorite podcast app.  Selected Reading In final check-in before Election Day, CISA cites low-level threats, and not much else (The Record) Joint ODNI, FBI, and CISA Statement (FBI Federal Bureau of Investigation) Exclusive: Nakasone says all the news about influence campaigns ahead of Election Day is actually 'a sign of success' (The Record) Virginia Company and Two Senior Executives Charged with Illegally Exporting Millions of Dollars of U.S. Technology to Russia (United States Department of Justice) Gmail 2FA Cyber Attacks—Open Another Account Before It's Too Late (Forbes) Mandatory MFA is coming to Google Cloud. Here's what you need to know (Google Cloud) Schneider Electric says hackers accessed internal project execution tracking platform (The Record) Google claims AI first after SQLite security bug discovered (The Register) Suspected Snowflake Hacker Arrested in Canada (404 Media) Is your air fryer spying on you? Concerns over ‘excessive' surveillance in smart devices (The Guardian)  Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.  Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

Paul's Security Weekly
Bug bounties, vulnerability disclosure, PTaaS, fractional pentesting - Grant McCracken - ASW #306

Paul's Security Weekly

Play Episode Listen Later Nov 5, 2024 65:35


After spending a decade working for appsec vendors, Grant McCracken wanted to give something back. He saw a gap in the market for free or low-cost services for smaller organizations that have real appsec needs, but not a lot of means to pay for it. He founded DarkHorse, which offers VDPs and bug bounties to organizations of all sizes for free, or at as low a cost as possible. While not a non-profit, the company's goal is to make these services as cheap as possible to increase accessibility for smaller or more budget-constrained organizations. The company has also introduced the concept of "fractional pentesting", access to cyber talent when and how you need it, based on what you can afford. This implies services beyond just offensive security, something we'll dive deeper into in the interview. We don't see DarkHorse ever competing with the larger Bug Bounty platforms, but rather providing services to the organizations too small for the larger platforms to sell to. Microsoft delays Recall AGAIN, Project Zero uses an LLM to find a buffer underflow in SQLite, the scourge of infostealer malware, zero standing privileges is easy if you have unlimited time (but no one does), reverse engineering Nintendo's Alarmo and RedBox's... boxes. Bonus: the book series mentioned in this episode The Lost Fleet by Jack Campbell. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-306

Paul's Security Weekly TV
Total Recall? LLM finds bug in SQLite, C++ safety failures, zero time for zero privs - ASW #306

Paul's Security Weekly TV

Play Episode Listen Later Nov 5, 2024 33:29


Microsoft delays Recall AGAIN, Project Zero uses an LLM to find a buffer underflow in SQLite, the scourge of infostealer malware, zero standing privileges is easy if you have unlimited time (but no one does), reverse engineering Nintendo's Alarmo and RedBox's... boxes. Bonus: the book series mentioned in this episode The Lost Fleet by Jack Campbell. Show Notes: https://securityweekly.com/asw-306

Application Security Weekly (Audio)
Bug bounties, vulnerability disclosure, PTaaS, fractional pentesting - Grant McCracken - ASW #306

Application Security Weekly (Audio)

Play Episode Listen Later Nov 5, 2024 65:35


After spending a decade working for appsec vendors, Grant McCracken wanted to give something back. He saw a gap in the market for free or low-cost services for smaller organizations that have real appsec needs, but not a lot of means to pay for it. He founded DarkHorse, which offers VDPs and bug bounties to organizations of all sizes for free, or at as low a cost as possible. While not a non-profit, the company's goal is to make these services as cheap as possible to increase accessibility for smaller or more budget-constrained organizations. The company has also introduced the concept of "fractional pentesting", access to cyber talent when and how you need it, based on what you can afford. This implies services beyond just offensive security, something we'll dive deeper into in the interview. We don't see DarkHorse ever competing with the larger Bug Bounty platforms, but rather providing services to the organizations too small for the larger platforms to sell to. Microsoft delays Recall AGAIN, Project Zero uses an LLM to find a buffer underflow in SQLite, the scourge of infostealer malware, zero standing privileges is easy if you have unlimited time (but no one does), reverse engineering Nintendo's Alarmo and RedBox's... boxes. Bonus: the book series mentioned in this episode The Lost Fleet by Jack Campbell. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-306

DevZen Podcast
Booting in 4.76 Days — Episode 480

DevZen Podcast

Play Episode Listen Later Oct 29, 2024 126:16


In this episode we discuss a unique project that boots Linux on the Intel 4004 microprocessor, strange gadgets, and a comparison of Redis and SQLite. [00:04:38] What we learned: The First Gaming Keyboard; How to steal a game idea. [00:22:52] How to boot Linux in 4.75 days. [01:05:57] Unexpectedly, disks are faster than memory, aka Redis vs SQLite: Rearchitecting: Redis to SQLite |… Read more →

The Changelog
Naming conventions that need to die (News)

The Changelog

Play Episode Listen Later Oct 21, 2024 9:26


Will Crichton wishes some naming conventions would die already, GitHub user brjsp noticed that Bitwarden's new SDK dependency isn't open source, Joaquim Rocha details his forking best practices, Sophie Koonin explains why you should go to conferences & Mike Hoye puts WordPress on SQLite.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

CEOs of publicly traded companies are often in the news talking about their new AI initiatives, but few of them have built anything with it. Drew Houston from Dropbox is different; he has spent over 400 hours coding with LLMs in the last year and is now refocusing his 2,500+ employees around this new way of working, 17 years after founding the company.

Timestamps
00:00 Introductions
00:43 Drew's AI journey
04:14 Revalidating expectations of AI
08:23 Simulation in self-driving vs. knowledge work
12:14 Drew's AI Engineering setup
15:24 RAG vs. long context in AI models
18:06 From "FileGPT" to Dropbox AI
23:20 Is storage solved?
26:30 Products vs Features
30:48 Building trust for data access
33:42 Dropbox Dash and universal search
38:05 The evolution of Dropbox
42:39 Building a "silicon brain" for knowledge work
48:45 Open source AI and its impact
51:30 "Rent, Don't Buy" for AI
54:50 Staying relevant
58:57 Founder Mode
01:03:10 Advice for founders navigating AI
01:07:36 Building and managing teams in a growing company

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and there's no Swyx today, but I'm joined by Drew Houston of Dropbox. Welcome, Drew.

Drew [00:00:14]: Thanks for having me.

Alessio [00:00:15]: So we're not going to talk about the Dropbox story. We're not going to talk about the Chinatown bus and the flash drive and all that. I think you've talked enough about it. Where I want to start is you as an AI engineer. So as you know, most of our audience is engineering folks, kind of like technology leaders. You obviously run Dropbox, which is a huge company, but you also do a lot of coding. I think that's how you spend almost 400 hours, just like coding. So let's start there. What was the first interaction you had with an LLM API and when did the journey start for you?

Drew [00:00:43]: Yeah. Well, I think probably all AI engineers or whatever you call an AI engineer, those people started out as engineers before that. So engineering is my first love. I mean, I grew up as a little kid. I was that kid. My first line of code was at five years old. I just really loved, I wanted to make computer games, like this whole path. That also led me into startups and eventually starting Dropbox. And then with AI specifically, I studied computer science, I got my, I did my undergrad, but I didn't do like grad level computer science. I didn't, I sort of got distracted by all the startup things, so I didn't do grad level work. But about several years ago, I made a couple of things. So one is I sort of, I knew I wanted to go from being an engineer to a founder. And then, but sort of the becoming a CEO part was sort of backed into the job. And so a couple of realizations. One is that, I mean, there's a lot of like repetitive and like manual work you have to do as an executive that is actually lends itself pretty well to automation, both for like my own convenience. And then out of interest in learning, I guess what we call like classical machine learning these days, I started really trying to wrap my head around understanding machine learning and informational retrieval more, more formally. So I'd say maybe 2016, 2017 started me writing these more successively, more elaborate scripts to like understand basic like classifiers and regression and, and again, like basic information retrieval and NLP back in those days. And there's sort of like two things that came out of that. One is techniques are super powerful. 
And even just like studying like old school machine learning was a pretty big inversion of the way I had learned engineering, right? You know, I started programming when everyone starts programming and you're, you're sort of the human, you're giving an algorithm to the, and spelling out to the computer how it should run it. And then machine learning, here's machine learning where it's like actually flip that, like give it sort of the answer you want and it'll figure out the algorithm, which was pretty mind bending. And it was both like pretty powerful when I would write tools, like figure out like time audits or like, where's my time going? Is this meeting a one-on-one or is it a recruiting thing or is it a product strategy thing? I started out doing that manually with my assistant, but then found that this was like a very like automatable task. And so, which also had the side effect of teaching me a lot about machine learning. But then there was this big problem, like anytime you, it was very good at like tabular structured data, but like anytime it hit, you know, the usual malformed English that humans speak, it would just like fall over. I had to kind of abandon a lot of the things that I wanted to build because like there's no way to like parse text. Like maybe it would sort of identify the part of speech in a sentence or something. But then fast forward to the LLM, I mean actually I started trying some of like this, what we would call like very small LLMs before kind of the GPT class models. And it was like super hard to get those things working. So like these 500 parameter models would just be like hallucinating and repeating and you know. So actually I'd kind of like written it off a little bit. But then the chat GPT launch and GPT-3 for sure. And then once people figured out like prompting and instruction tuning, this was sort of like November-ish 2022 like everybody else sort of that the chat GPT launch being the starting gun for the whole AI era of computing and then having API access to three and then early access to GPT-4. I was like, oh man, it's happening. And so I was literally on my honeymoon and we're like on a beach in Thailand and I'm like coding these like AI tools to automate like writing or to assist with writing and all these different use cases.

Alessio [00:04:14]: You're like, I'm never going back to work. I'm going to automate all of it before I get back.

Drew [00:04:17]: And I was just, you know, ever since then, I mean, I've always been like coding like prototypes and just stuff to make my life more convenient, but like escalated a lot after 22. And yeah, I spent, I checked, I think it was probably like over 400 hours this year so far coding because I had my paternity leave where I was able to work on some special projects. But yeah, it's a super important part of like my whole learning journey is like being really hands-on with these things. And I mean, it's probably not a typical recipe, but I really love to get down to the metal as far as how this stuff works.

Alessio [00:04:47]: Yeah. So Swyx and I were with Sam Altman in October 22. We were like at a hack day at OpenAI and that's why we started this podcast eventually. But you did an interview with Sam like seven years ago and he asked you what's the biggest opportunity in startups and you were like machine learning and AI and you were almost like too early, right? It's like maybe seven years ago, the models weren't quite there. How should people think about revalidating like expectations of this technology? 
You know, I think even today people will tell you, oh, models are not really good at X because they were not good 12 months ago, but they're good today.

Drew [00:05:19]: What's your project? Heuristics for thinking about that or how is, yeah, I think the way I look at it now is pretty, has evolved a lot since when I started. I mean, I think everybody intuitively starts with like, all right, let's try to predict the future or imagine like what's this great end state we're going to get to. And the tricky thing is like often those prognostications are right, but they're right in terms of direction, but not when. For example, you know, even in the early days of the internet, 90s when things were even like tech space and you know, even before like the browser or things like that, people were like, oh man, you're going to have, you know, you're going to be able to order food, get like a Snickers delivered to your house, you're going to be able to watch any movie ever created. And they were right. But they were like, you know, it took 20 years for that to actually happen. And before you got to DoorDash, you had to get, you started with like Webvan and Cosmo and before you get to Spotify, you had to do like Napster and Kazaa and LimeWire and like a bunch of like broken Britney Spears MP3s and malware. So I think the big lesson is being early is the same as being wrong. Being late is the same as being wrong. So really how do you calibrate timing? And then I think with AI, it's the same thing that people are like, oh, it's going to completely upend society and all these positive and negative ways. I think that's like most of those things are going to come true. The question is like, when is that going to happen? And then with AI specifically, I think there's also, in addition to sort of the general tech category or like jumping too fast to the future, I think that AI is particularly susceptible to that. And you look at self-driving, right? This idea of like, oh my God, you can have a self-driving car captured everybody's imaginations 10, 12 years ago. And you know, people are like, oh man, in two years, there's not going to be another year. There's not going to be a human driver on the road to be seen. It didn't work out that way, right? We're still 10, 12 years later where we're in a world where you can sort of sometimes get a Waymo in like one city on earth. Exciting, but just took a lot longer than people think. And the reason is there's a lot of engineering challenges, but then there's a lot of other like societal time constants that are hard to compress. So one thing I think you can learn from things like self-driving is they have these levels of autonomy that's a useful kind of framework in driving or these like maturity levels. People sort of skip to like level five, full autonomy, or we're going to have like an autonomous knowledge worker that's just going to take, that's going to, and then we won't need humans anymore kind of projection that that's going to take a long time. But then when you think about level one or level two, like these little assistive experiences, you know, we're seeing a lot of traction with those. So what you see really working is the level one autonomy in the AI world would be like the tab auto-complete and co-pilot, right? And then, you know, maybe a little higher is like the chatbot type interface. Obviously you want to get to the highest level you can to build a good product, but the reliability just isn't, and the capability just isn't there in the early innings. 
And so, and then you think of other level one, level two type things, like Google Maps probably did more for self-driving than in literal self-driving, like a billion people have like the ability to have like maps and navigation just like taken care of for you autonomously. So I think the timing and maturity are really important factors to include.

Alessio [00:08:23]: The thing with self-driving, maybe one of the big breakthroughs was like simulation. So it's like, okay, instead of driving, we can simulate these environments. It's really hard to do when knowledge work, you know, how do you simulate like a product review? How do you simulate these things? I'm curious if you've done any experiments. I know some companies have started to build kind of like a virtual personas that you can like bounce ideas off of.

Drew [00:08:42]: I mean, fortunately in a company you generate lots of, you know, actual human training data all the time. And then I also just like start with myself, like, all right, I can, you know, it's pretty tricky even within your company to be like, all right, let's open all this up as quote training data. But, you know, I can start with my own emails or my own calendar or own stuff without running into the same kind of like privacy or other concerns. So I often like start with my own stuff. And so that is like a one level of bootstrapping, but actually four or five years ago during COVID, we decided, you know, a lot of companies were thinking about how do we go back to work? And so we decided to really lean into remote and distributed work because I thought, you know, this is going to be the biggest change to the way we work in our lifetimes. And COVID kind of ripped up a bunch of things, but I think everybody was sort of pleasantly surprised how with a lot of knowledge work, you could just keep going. And actually you were sort of fine. Work was decoupled from your physical environment, from being in a physical place, which meant that things people had dreamed about since the fifties or sixties, like telework, like you actually could work from anywhere. And that was now possible. So we decided to really lean into that because we debated, should we sort of hit the fast forward button or should we hit the rewind button and go back to 2019? And obviously that's been playing out over the last few years. And we decided to basically turn, we went like 90% remote. We still, the in-person part's really important. We can kind of come back to our working model, but we're like, yeah, this is, everybody is going to be in some kind of like distributed or hybrid state. So like instead of like running away from this, like let's do a full send, let's really go into it. Let's live in the future. A few years before our customers, let's like turn Dropbox into a lab for distributed work. And we do that like quite literally, both of the working model and then increasingly with our products. And then absolutely, like we have products like Dropbox Dash, which is our universal search product. That was like very elevated in priority for me after COVID because like now you have, we're putting a lot more stress on the system and on our screens, it's a lot more chaotic and overwhelming. And so even just like getting the right information, the right person at the right time is a big fundamental challenge in knowledge work and these, in the distributed world, like big problem today is still getting, you know, has been getting bigger. 
And then for a lot of these other workflows, yeah, there's, we can both get a lot of natural like training data from just our own like strategy docs and processes. There's obviously a lot you can do with synthetic data and you know, actually like LMs are pretty good at being like imitating generic knowledge workers. So it's, it's kind of funny that way, but yeah, the way I look at it is like really turn Dropbox into a lab for distributed work. You think about things like what are the big problems we're going to have? It's just the complexity on our screens just keeps growing and the whole environment gets kind of more out of sync with what makes us like cognitively productive and engaged. And then even something like Dash was initially seeded, I made a little personal search engine because I was just like personally frustrated with not being able to find my stuff. And along that whole learning journey with AI, like the vector search or semantic search, things like that had just been the tooling for that. The open source stuff had finally gotten to a place where it was a pretty good developer experience. And so, you know, in a few days I had sort of a hello world type search engine and I'm like, oh my God, like this completely works. You don't even have to get the keywords right. The relevance and ranking is super good. We even like untuned. So I guess that's to say like I've been surprised by if you choose like the right algorithm and the right approach, you can actually get like super good results without having like a ton of data. And even with LLMs, you can apply all these other techniques to give them, kind of bootstrap kind of like task maturity pretty quickly.

Alessio [00:12:14]: Before we jump into Dash, let's talk about the Drew Houston AI engineering stuff. So IDE, let's break that down. What IDE do you use? Do you use Cursor, VS Code, do you use any coding assistant, like WeChat, is it just autocomplete?

Drew [00:12:28]: Yeah, yeah. Both. So I use VS Code as like my daily driver, although I'm like super excited about things like Cursor or the AI agents. I have my own like stack underneath that. I mean, some off the shelf parts, some pretty custom. So I use the continue.dev just like AI chat UI basically as just the UI layer, but I also proxy the request. I proxy the request to my own backend, which is sort of like a router. You can use any backend. I mean, Sonnet 3.5 is probably the best all around. But then these things are like pretty limited if you don't give them the right context. And so part of what the proxy does is like there's a separate thing where I can say like include all these files by default with the request. And then it becomes a lot easier and like without like cutting and pasting. And I'm building mostly like prototype toy apps, so it's like a front end React thing and a Python backend thing. And so it can do these like end to end diffs basically. And then I also like love being able to host everything locally or do it offline. So I have my own, when I'm on a plane or something or where like you don't have access or the internet's not reliable, I actually bring a gaming laptop on the plane with me. It's like a little like blue briefcase looking thing. And then I like literally hook up a GPU like into one of the outlets. 
And then I have, I can do like transcription, I can do like autocomplete, like I have an 8 billion, like Llama will run fine.

Alessio [00:13:44]: And are you using like Ollama to run the model?

Drew [00:13:47]: No, I use, I have my own like LLM inference stack. I mean, it uses the backend somewhat interchangeable. So everything from like ExLlama to vLLM or SGLang, there's a bunch of these different backends you can use. And then I started like working on stuff before all this tooling was like really available. So you know, over the last several years, I've built like my own like whole crazy environment and like in stack here. So I'm a little nuts about it.

Alessio [00:14:12]: Yeah. What's the state of the art for, I guess not state of the art, but like when it comes to like frameworks and things like that, do you like using them? I think maybe a lot of people say, hey, things change so quickly, they're like trying to abstract things. Yeah.

Drew [00:14:24]: It's maybe too early today. As much as I do a lot of coding, I have to be pretty surgical with my time. I don't have that much time, which means I have to sort of like scope my innovation to like very specific places or like my time. So for the front end, it'll be like a pretty vanilla stack, like a Next.js, React based thing. And then these are toy apps. So it's like Python, Flask, SQLite, and then all the different, there's a whole other thing on like the backend. Like how do you get, sort of run all these models locally or with a local GPU? The scaffolding on the front end is pretty straightforward, the scaffolding on the backend is pretty straightforward. Then a lot of it is just like the LLM inference and control over like fine grained aspects of how you do generation, caching, things like that. And then there's a lot, like a lot of the work is how do you take, sort of go to an IMAP, like take an email, get a new, or a document or a spreadsheet or any of these kinds of primitives that you work with and then translate them, render them in a format that an LLM can understand. So there's like a lot of work that goes into that too. Yeah.

Alessio [00:15:24]: So I built a kind of like email triage system and like I would say 80% of the code is like Google and like pulling emails and then the actual AI part is pretty easy.

Drew [00:15:34]: Yeah. And even, same experience. And then I tried to do all these like NLP things and then to my dismay, like a bunch of reg Xs were like, got you like 95% of the way there. So I still leave it running, I just haven't really built like the LLM powered version of it yet. Yeah.

Alessio [00:15:51]: So do you have any thoughts on rag versus long context, especially, I mean with Dropbox, you know? Sure. Do you just want to shove things in? Like have you seen that be a lot better?

Drew [00:15:59]: Well, they kind of have different strengths and weaknesses, so you need both for different use cases. I mean, it's been awesome in the last 12 months, like now you have these like long context models that can actually do a lot. You can put a book in, you know, Sonnet's context and then now with the later versions of LLAMA, you can have 128k context. So that's sort of the new normal, which is awesome and that, that wasn't even the case a year ago. That said, models don't always use, and certainly like local models don't use the full context well fully yet, and actually if you provide too much irrelevant context, the quality degrades a lot. 
And so I'd say in the open source world, we're still just getting to the cusp of the full context being usable. And then of course, when you're something like Dropbox Dash, you're basically building this whole brain that has read everything your company's ever written. And so that's not going to fit into your context window, so you need RAG just as a practical reality, for a lot of the same reasons you need RAM and hard disk in conventional computer architecture. And I think these things will keep horse trading; maybe if a million or 10 million tokens is the new context length, maybe that shifts. Maybe the bigger picture is, it's super exciting to talk about the LLM and that piece of the puzzle, but there's this whole other scaffolding of more conventional retrieval or conventional machine learning, especially because you have to scale up products to millions of people, and what you do in your toy app is not going to scale to that from a cost or latency or performance standpoint. So I think you really need these hybrid architectures where you have very purpose-fit tools. You're probably not using Sonnet 3.5 for all of your normal product use cases. You're going to use a fine-tuned 8 billion model, or sort of the minimum model that gets you the right output. And a smaller model also has much better cost and latency characteristics on that front.Alessio [00:17:48]: Yeah. Let's jump into the Dropbox AI story. So sure. Your initial prototype was Files GPT. How did it start? And then how did you communicate that internally? You know, I know you have a pretty strong memo culture. One where you're like, okay, hey, we've got to really take this seriously.Drew [00:18:06]: Yeah. Well, on the latter, how we took AI seriously as a company started kind of around that time, that honeymoon time, unfortunately. In January, I wrote this memo to the company, basically about how we need to play offense in '23, and how most of the time the kind of concrete is set, the winners are the winners, and things are kind of frozen. But then with these new eras of computing, like the PC or the internet or the phone, the concrete unfreezes and you can do things differently and have a new set of winners. It's sort of like a new season starts. As a result of a lot of that personal hacking and just thinking about this, I'm like, yeah, this is an inflection point in the industry. We really need to change how we think about our strategy. And becoming an AI-first company was probably the headline thing that we did, and then calling on everybody in the company to really think about, in your world, how is AI going to reshape your workflows, or what's the AI-native way of thinking about your job. File GPT, which is sort of this Dropbox AI kind of initial concept, actually came from our engineering team as we called on everybody to really think about what we should be doing that's new or different. So it was kind of organic and bottoms up; a bunch of engineers just hacked that together. And then that materialized as basically, when you preview a file on Dropbox, you can have kind of the most straightforward possible integration of AI, which is a good thing.
Like basically you have a long PDF, you want to be able to ask questions of it. So, a pretty basic implementation of RAG, and being able to do that when you preview a file on Dropbox. So that was the origin of that. That was back in 2023, when the starting engines had just, you know, gotten going.Alessio [00:19:53]: It's funny where you're basically like, these files that people have, they really don't want them in a way, you know? You're storing all these files and you actually don't want to interact with them. You want a layer on top of it. And that's kind of what also takes you to Dash eventually, which is like, hey, you actually don't really care where the file is. You just want to be the place that aggregates it. How do you think about what people will know about files? You know, are files the actual file? Are files like the metadata, and they're just kind of a pointer that goes somewhere, and you don't really care where it is?Drew [00:20:21]: Yeah.Alessio [00:20:22]: Any thoughts about that?Drew [00:20:23]: Totally. Yeah. I mean, there's a lot of potential complexity in that question, right? What's the difference between a file and a URL? And you can go into the technicals, it's like pass by value, pass by reference. Okay, what's the format? All right, so it starts with a primitive. It's not really a flat file, it's structured data, it's sort of collaborative, it's kept in sync, blah, blah, blah. I actually don't start there at all. I just start with, let's work back from how humans think about this stuff, or how they should think about this stuff. Meaning, I don't think, oh, here are my files and here are my links or cloud docs. I'm just sort of like, oh, here's my stuff. Here are my documents. Here's my media. Here are my projects. Here are the people I'm working with. So it starts from primitives more like those, like how do humans think about these things? And then start from a more ideal experience. Because if you think about it, we kind of have this situation that will look particularly medieval in hindsight, where, all right, how do you manage your work stuff? Well, on one side of your screen, you have this file browser that literally hasn't changed since the early eighties, right? You could take someone from the original Mac and sit them in front of a computer today and they'd be like, this is it? And it's been 40 years, right? Then on the other side of your screen, you have Chrome or a browser that has so many tabs open, you can no longer see text or titles. This is the state of the art for how we manage stuff at work. Interestingly, neither of those experiences was purpose-built to be the home for your work stuff or even anything related to it. And so it's important to remember, we get stuck in these local maxima pretty often in tech. We're obviously aware that files are not going away, especially in certain domains. So that format really matters, and files are still going to be the tool you use if there's something big, right? If you have a big video file, that kind of format in a file makes sense.
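A rough sketch of the ask-questions-of-a-long-PDF pattern described at the top of this exchange, assuming a simple chunk-embed-retrieve pipeline. The pypdf and sentence-transformers libraries here are illustrative choices, not what Dropbox actually shipped, and the answering step is left abstract since any chat model could sit behind it.

# Chunk, embed, retrieve, then hand only the relevant chunks to an LLM.
# Assumes `pip install pypdf sentence-transformers numpy`.
# Illustrative libraries and parameters; not Dropbox's actual stack.
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk_pdf(path: str, size: int = 1000) -> list[str]:
    """Extract the PDF's text and split it into fixed-size chunks."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(chunks: list[str], question: str, k: int = 4) -> list[str]:
    """Return the k chunks most similar to the question."""
    vecs = model.encode(chunks, normalize_embeddings=True)
    q = model.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(-(vecs @ q))[:k]
    return [chunks[i] for i in top]

def build_prompt(path: str, question: str) -> str:
    context = "\n---\n".join(retrieve(chunk_pdf(path), question))
    # Only a few KB of relevant context goes to the model, not the whole file.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

Only the few most relevant chunks reach the model, which is the practical answer to the context-window limits discussed a moment earlier.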
There's a bunch of industries, like construction or architecture or sort of these domain specific areas, and media generally, if you're making music or photos or video, that all kind of fits in the big file zone where Dropbox is really strong, and that's what customers love us for. It's also pretty obvious that a lot of stuff that used to be in, you know, Word docs or Excel files has tilted towards the browser, and that tilt is going to continue. So with Dash, we wanted to make something that was really cloud-native, AI-native, and deliberately not tied down to the abstractions of the file system. Now on the other hand, it would be ironic and bad if we then fractured the experience, where it's like, well, if it touches a file, it's a syncing metaphor in this app, and if it's a URL, it's this completely different interface. So there's a convergence that I think makes sense over time. But I think you have to start from not so much the technology; start from what the humans want, then what's the idealized product experience, and then what are the technical underpinnings that can make that good experience?Alessio [00:23:20]: I think it's kind of intuitive that in Dash, you can connect Google Drive, right? Because you think about Dropbox, it's like, well, it's file storage, you really don't want people to store files somewhere else, but the reality is that they do. How do you think about the importance of storage? Do you kind of feel storage is almost solved, where it's like, hey, you can store these files anywhere, and what matters is access?Drew [00:23:38]: It's a little bit nuanced, in that if you're dealing with large quantities of data, it actually does matter. The implementation matters a lot when you're dealing with, you know, 10 gig video files; then you inherit all the problems of sync and have to go into a lot of the challenges that we've solved. But you're touching on a pretty important question: what is the value we provide? What does Dropbox do? And probably like most people, I would have said, well, Dropbox syncs your files. And we didn't even really have a mission for the company in the beginning. I'm just like, yeah, I just don't want to carry a thumb drive around, and life would be a lot better if our stuff just lived in the cloud, and I didn't have to think about what device the thing is on, or why these operating systems are fighting with each other and incompatible. You know, I just want to abstract all of that away. So even we thought, all right, Dropbox provides storage. But when we talked to our customers, they're like, that's not how we see this at all. Actually, Dropbox is not just a hard drive in the cloud. It's the place where I go to work, or, for the small business I started, it's the place where my dreams come true. Or it's like, yeah, it's not keeping files in sync. It's keeping people in sync. It's keeping my team in sync. And so they're using this kind of language where we're like, wait, okay, because, I don't know, storage probably is a commodity, or what we do is a commodity. But then we talked to our customers and it's like, no, we're not buying the storage, we're buying the ability to access all of our stuff in one place.
We're buying the ability to share everything, and in a lot of ways, people are buying the ability to work from anywhere. The fact that it was file syncing was an implementation detail of this higher order need that they had. So I think that's where we start too, which is, what is the higher order thing, the job the customer is hiring Dropbox to do? Storage in the new world is kind of incidental to that. I mean, it still matters for things like video or those kinds of workflows. The value of Dropbox had never been that we provide you the cheapest bits in the cloud. But it is a big pivot, from Dropbox is the company that syncs your files to, where we're going now, Dropbox is the company that helps you organize all your cloud content. I started the company because I kept forgetting my thumb drive. But the question I was really asking was, why is it so hard to find my stuff, organize my stuff, share my stuff, keep my stuff safe? You know, I was always like one washing machine away: I would leave my little thumb drive with all my prior company stuff on it in the pocket of my shorts and then almost wash it and destroy it. And so I was like, this is medieval, that we have to think about this. So that same mindset is how I approach where we're going. Unfortunately, we're sort of back to the same problems. It's really hard to find my stuff. It's really hard to organize myself. It's hard to share my stuff. It's hard to secure my content at work. The problem is the same, but the shape of the problem and the shape of the solution are pretty different. You know, instead of a hundred files on your desktop, it's now a hundred tabs in your browser, et cetera. But I think that's the starting point.Alessio [00:26:30]: How has the idea of a product evolved for you? So, you know, famously Steve Jobs said about Dropbox, this is just a feature, it's not a product. And then you built like a $10 billion feature. In the age of AI, how do you think about it? Maybe things that used to be a product are now features, because the AI on top of it is the product. Like, what's your mental model? How do you think about it?Drew [00:26:50]: Yeah. So I don't think there's really a bright line. I don't know if I use the words features and products in my mental model that much of how I break it down, because, it's a good question, it's not that I don't think about features or products, but it does start from that place of, all right, we have all these new colors we can paint with, and what are these higher order needs that are sort of evergreen, right? So people will always have stuff at work. They'll always need to be able to find it, you know, all the verbs I just mentioned. It's like, okay, how can we make a better painting, and how can we use some of these new colors? And then, yeah, it's pretty clear that after the large models, the way you find stuff and share stuff is going to be completely different; after COVID, it's going to be completely different. So that's the starting point. But I think it is also important to do more than just work back from the customer and what they're trying to do. And you know, we've learned a lot of this the hard way sometimes. Okay, you might start with a customer.
You might start with a job to be done there. You're like, all right, what's the solution to their problem? Can we build the best product that solves that problem? Right. Like, what's the best way to find your stuff in the modern world? Well, right now the status quo for the vast majority of the billion knowledge workers is they have like 10 search boxes at work that each search 10% of your stuff. That's clearly broken. Obviously you should just have one search box. All right, so we can do that. And that also has to be, I'll come back to defensibility in a second, but can we build the right solution that is meaningfully better than the status quo? Like, yes, clearly. Okay. Then, can we get distribution and growth? That's sort of the next thing you learn as a founder. You start with like, what's the product? What's the product? What's the product? Then you're like, wait, wait, we need distribution and we need a business model. So those are the next two dominoes you have to knock down, or sort of needles you have to thread at the same time. So all right, how do we grow? I mean, Dropbox 1.0 was really this self-serve viral model; we borrowed a lot from the consumer internet playbook and what Facebook and social media were doing, and then translated that to the business world. How do you get distribution, especially as a startup? And then a business model: storage, in the beginning, happened to be something people were willing to pay for. They recognized that, okay, if I don't buy something like Dropbox, I'm going to have to buy an external hard drive, I'm going to have to buy a thumb drive; I have to pay for something one way or another. People were already paying for things like backup. So we felt good about that. But then the last domino is defensibility. Okay, so you build this product and you get the business model, but then what do you do when the incumbents' next chess move is copy, bundle, kill? They're going to copy your product, they'll bundle it with their platforms, and they'll give it away for free or at no added cost. And, you know, we had a lot of scar tissue from being on the wrong side of that. Now, you don't need to solve all four or five variables at once; you can have some flexibility. But the more of those gates you get through, you sort of add a 10x to your valuation. And so with AI, I think there's been a lot of focus on the large language model, but large language models are a pretty bad business when you take off your tech lens and put on a business lens. There's this weirdly self-commoditizing thing where models only have value if they're on this Pareto frontier of size and quality and cost. If you're not on that frontier, the second the frontier moves out, which it does every week, your model literally has zero economic value, because it's dominated by the new thing. And LLMs generate output that can be used to train or improve other models. So there are weird, peculiar things that are specific to the large language model. And then you have to ask, all right, where's the value going to accrue in the stack or the value chain?
And, you know, certainly at the bottom, with Nvidia and the semiconductor companies, and then it's going to be at the top, the people who have the customer relationship, who have the application layer. Those are a few of the lenses that I look at a question like that through.Alessio [00:30:48]: Do you think AI is making people more careful about sharing their data at all? People were like, oh, data is important, but, whatever, I'm just throwing it out there. Now everybody's like, but are you going to train on my data? And, like, your data is actually not that good to train on anyway. But how have you seen customers think about what to put in and what not to?Drew [00:31:06]: I mean, everybody is concerned about this, and everybody should be, right? Because nobody wants their personal or company information to be kind of ground up into little pellets to sell you ads or train the next foundation model. I think it's massively top of mind for every one of our customers, and me personally, and with my Dropbox hat on, it's so fundamental. And, you know, we had experience with this too at Dropbox 1.0, the same kind of resistance: like, wait, I'm going to take my stuff on my hard drive and put it on your server somewhere? Are you serious? What could possibly go wrong? And before that, it was like, wait, I'm going to put my credit card number into this website? And before that, it was like, hey, I'm going to take all my cash and put it in a bank instead of under my mattress? You know, so there's a long history of tech and comfort. So in some sense, AI is kind of another round of the same thing, but the issues are real. And then when I think about defensibility for Dropbox, that's actually a big advantage that we have. One, our incentives are very aligned with our customers', right? We only make money if you pay us, and you only pay us if we do a good job. So we don't have any side hustle: we're not training the next foundation model, we're not trying to sell you ads. Actually, we're not even trying to lock you into an ecosystem; the whole point of Dropbox is it works, you know, everywhere. Because I think one of the big questions we've been circling around is, in the world of AI, where should our lane be? Every startup, and every big company, has to ask, where can we really win? But to me, it's a lot of the trust advantages: platform agnostic, having a very clean business model, not having these other incentives. And then we're also super transparent, and were transparent early on. We're like, all right, we're going to establish these AI principles, very table stakes stuff of, here's transparency, we want to give people control, we want to cover privacy, safety, bias, fairness, all these things. And we put that out up front to set some explicit guardrails, because everybody wants a trusted partner as they go into the wild world of AI. And then, you know, you also see people cutting corners, or there's a lot of uncertainty, or, you know, moving the pieces around after the fact, which no one feels good about.Alessio [00:33:14]: I mean, I would say the last 10, 15 years, the race was kind of being the system of record, being the storage provider.
I think today it's almost like, hey, if I can use Dash to access my Google Drive file, why would I pay Google for their AI feature? And vice versa, you know, if I can connect my Dropbox storage to this other AI assistant. How do you think about not being able to capture all the value, and how open people will stay? I think today things are still pretty open, but I'm curious if you think things will get more closed or more open later.Drew [00:33:42]: Yeah. Well, I think you have to get the value exchange right. And I think you have to be a trustworthy partner; no one's going to partner with you if they think you're going to eat their lunch, right? Or if you're going to disintermediate them, and all the companies are quite sophisticated with how they think about that. So we know that's going to be the reality, and we're actually not trying to eat anyone's lunch, Google Drive's or anyone else's. Actually, we'll integrate with Google Drive, we'll integrate with OneDrive, really any of the content platforms, even if they compete with file syncing. So that's actually a big strategic shift. We're not really reliant on being the store of record, and there are pros and cons to this decision. But if you think about it, we're basically providing all these apps more engagement. We're helping users do what they're really trying to do, which is to get, you know, that Google Doc or whatever. And we're not trying to be like, oh, by the way, use this other thing. This is all part of our brand reputation. It's like, no, we give people freedom to use whatever tools or operating system they want. We're not taking anything away from our partners. We're actually making their thing more useful, or routing people to those things. I mean, on the margin, there's something like, well, okay, to the extent you do RAG and summarize things, maybe that doesn't generate a click. Okay. We also know there's infinite investment going into the work agents. So we're not really building a Copilot or Gemini competitor. Not because we don't find that thing captivating; yeah, of course we do. But just, you know, you learn after some time in this business that there are some places that are just going to be such red oceans, just super big battlefields. Everybody's trying to solve the same problem, and they just start duplicating each other's effort. And then meanwhile, the concern would be, well, there are all these other problems that aren't being properly addressed by AI. And I was concerned that everybody's fixated on the agent or the chatbot interface, but forgetting that, hey guys, we have the opportunity to really fix search, or build a self-organizing Dropbox or environment, or all these other things that can be a complement. Because we don't really want our customers to be thinking, well, do I use Dash or do I use Copilot? And frankly, none of them do. In a lot of ways, actually, some of the things that we do on the security front with Dash for Business are a good complement to Copilot. Because as part of Dash for Business, we actually give admins, IT, universal visibility and control over what's being shared in your company across all these different platforms.
And as a precondition to installing something like Copilot or Dash or Glean or any of these other things, right? IT wants to know, hey, before we turn on all the lights in here, let's do a little cleaning first before we let everybody in. And there just haven't been good tools to do that. And post AI, you would do it completely differently. And so that's a cornerstone of what we do, and what sets us apart from these tools. And actually, in a lot of cases, we will help those tools be adopted, because we actually help them do it safely. Yeah.Alessio [00:36:27]: How do you think about building for AI versus people? When you mentioned cleaning up, it's because maybe before, you were like, well, humans have some common sense when they look at data on what to pick, versus models are just kind of ingesting. Do you think about building products differently, knowing that a lot of the data will actually be consumed by LLMs and agents and whatnot, versus just people?Drew [00:36:46]: I aim a little bit more for, you know, level three, level four kind of automation, because even if the LLM were capable of completely autonomously organizing your environment, it would probably do a reasonable job, but I think you build bad UI when the user has to fit themselves to the computer, versus something that's like an instrument you're playing, where you have some kind of good partnership, and on the other side, you don't have to do all this manual effort. And so the command line was sort of subsumed by, you know, graphical UI. We'll keep toggling back and forth. Maybe chat, especially when you bring in voice, will be an increasing part of the puzzle. But I don't think we're going to go back to a million command lines either. And then as far as the plumbing of, well, is this going to be consumed by an LLM or a human? Fortunately, you don't really have to design it that differently. I mean, you have to make sure everything's legible to the LLM, but it's quite tolerant of, you know, malformed everything. And actually, the easier something is to read for a human, the easier it is for an LLM to read, to some extent, as well. But we really think about how we build the right human-machine interface, where you're still in control and driving, but it's super easy to translate your intent into, you know, however you want your folders, your environment, your preferences set up.Alessio [00:38:05]: What's the most underrated thing about Dropbox that maybe people don't appreciate?Drew [00:38:09]: Well, that this is just such a natural evolution for us. When people think about the world of AI, file syncing is not the next thing you would autocomplete mentally. And I think we also did our first thing so well that there were a lot of benefits to that, but we hit it so hard with our first product that it was pretty tough to come up with a sequel, and we had a bit of a sophomore slump. And, you know, I think actually a lot of kids do use Dropbox in high school or things like that, but they're a lot more in the browser than their file system, right.
And we know all this, but still, we're super well positioned to help a new generation of people with these fundamental problems that affect, you know, a billion knowledge workers: just finding, organizing, sharing your stuff and keeping it safe. And there's a ton of unsolved problems in those four verbs. We've talked about search a little bit, but just think about a whole new generation of people growing up without the ability to organize their things. And yeah, search is great, and if you just have a giant infinite pile of stuff, then search does make that more manageable. But you do lose some things that were pretty helpful in prior decades, right? So even just the idea of persistence, stuff still being there when you come back. When I go to sleep and wake up, my physical papers are still on my desk. When I reboot my computer, the files are still on my hard drive. But in my browser, if my operating system updates the wrong way and closes the browser, or if, more commonly, I just declare tab bankruptcy, your whole workspace just clears itself out and starts from zero. And you're like, on what planet is this a good idea? There's no concept of, oh, here's the stuff I was working on, let me get back to it. And so that's a big motivation for things like Dash. Huge problems with sharing, right? If I'm remodeling my house or if I'm getting ready for a board meeting, what do I do if I have a Google Doc and an Airtable and a 10 gig 4K video? There's no collection that holds mixed format things. And so it's another kind of hidden problem, hidden in plain sight: these missing primitives. Files have folders, songs have playlists, but links have, you know, nothing; somehow we missed that. And so we're building that with Stacks in Dash, where it's a mixed format, smart collection that you can share, whatever you need, internally or externally, and have it be a really well designed experience, platform agnostic, not tying you to any one ecosystem. We're super excited about that. You know, we talked a little bit about security in the modern world: IT signs all these compliance documents, but in reality has no way of knowing where anything is or what's being shared. It's actually better for them to not know about it than to know about it and not be able to do anything about it. And when we talked to customers, we found that there were literally people in IT whose job it is to manually go through, log into Office, log into Workspace, log into each tool, and comb through, one by one, the links that people have shared, and unshare them. There's an unshare guy in all these companies, and that job is probably about as fun as it sounds. My God. So, fortunately, I guess what makes technology a good business is that for every problem it solves, it creates a new one, so there's always a sequel that you need. And so, you know, I think the happy version of our Act 2 is kind of similar to Netflix. I look at a lot of these companies that really had multiple acts, and Netflix had the vision to be streaming from the beginning, but broadband and everything wasn't ready for it. So they started by mailing you DVDs, but then went to streaming, and the value probably the whole time was just, let me press play on something I want to see.
And they did a really good job of bringing people along from the DVD mailing era. You would think, oh, the DVD mailing piece is this burning platform, or it's legacy, you know, ankle weight. And they did have some false starts in that transition. But when you really think about it, they were able to take that DVD mailing audience and migrate them to streaming, take their season one people and bootstrap a victory in season two, because they weren't starting from scratch. And both of those worlds were super easy to forget and be like, oh, it's all kind of destiny. But no, that was an incredibly competitive environment, and Netflix did a great job of activating their Act 1 advantages and winning in Act 2 because of it. So I don't think people see Dropbox that way. I think people are thinking about us just in terms of our Act 1, and they're like, yeah, Dropbox is fine. I used to use it 10 years ago. But what have they done for me lately? And I don't blame them. So fortunately, we have better and better answers to that question every year.Alessio [00:42:39]: And you call it the silicon brain. So you see Dash and Stacks being like the silicon brain interface, basically, for people?Drew [00:42:46]: I mean, that's part of it. Yeah. And writ large, I mean, I think what's so exciting about AI, and everybody's got their own take on it, but if you really zoom out civilizationally, what allows humans to make progress, what is above the fold in terms of what's really mattered? I mean, there are a lot of points, but some that come to mind: you think about things like the industrial revolution. Before that, mechanical energy, the only way you could get it was by your own hands, maybe an animal, maybe some clever machines, or machines made of wood or something. But you were quite energy limited. And then suddenly, with the industrial revolution and things like electricity, it's like, all right, mechanical energy is now available on demand as a very fungible kind of resource, and suddenly we consume a lot more of it, and the standard of living goes way, way, way up. That's been pretty limited to the physical realm. And then I believe that with the large models, that's really the first time we can kind of bottle up cognitive energy and offload it. If we started by offloading a lot of our mechanical or physical busy work to machines, which freed us up to make a lot of progress in other areas, then with AI and computing, we can now offload a lot more of our cognitive busy work to machines. And then we can create a lot more of it, and the price of it goes way down. Importantly, it's not like humans never did anything physical again. It's sort of like, no, but we're more leveraged. We can move a lot more earth with a bulldozer than a shovel. And so that's, at the most fundamental level, what's so exciting to me about AI. And so what's the silicon brain? It's like, well, we have our human brains, and then we're going to have this other half of our brain that's sort of coming online, our silicon brain. And it's not one or the other. They complement each other. They have very complementary strengths and weaknesses. And that's a good thing.
There's also this weird tangent we've gone on as a species where knowledge workers have this epidemic of burnout, great resignation, quiet quitting. And there's a lot going on there. But I think one of the biggest problems we have is that people deserve meaningful work, and, you know, we can't solve all of it, but at least in knowledge work, there are a lot of own goals, unforced errors that we're committing. On one side, with brain science, we know what makes us productive, and fortunately it's also what makes us engaged: it's when we can focus, or when we're in some kind of flow state. But then we go to work, and increasingly going to work is going to a screen, and if you wanted to design an environment that made it impossible to ever get into a flow state or ever be able to focus, what we have is that. And that was the thing that, just seven, eight years ago, blew my mind. I just could not understand why knowledge work is so jacked up on this dimension. We put ourselves in the most cognitively polluted environment possible, and we put so much more stress on the system when we're working remotely, and things like that. All of these problems are just going in the wrong direction, and I just couldn't understand why this was a problem that wasn't fixing itself. And I'm like, maybe there's something Dropbox can do with this, and, you know, things like Dash are the first step. But then, well, why are humans in this polluted state? It's because all of the tools we have today, this generation of tools, just passes on all of the weight, the burden, to the human, right? So it's like, here's a bajillion, you know, 80,000 unread emails, cool. Here's 25 unread Slack channels. We all get, like, jittery just thinking about it. And then you look at your phone and it says 80,000 unread things; there's no product question for which this is the right answer. Fortunately, that's where things like our silicon brain are pretty helpful, because they can serve as an attention filter, where it's like, actually, computers have no problem reading a million things. Humans can't do that, but computers can. And to some extent, this was already happening with computers; you know, Excel is a version of your silicon brain, or you could draw the line arbitrarily. But with larger models, now so many of these little subtasks and tasks we do at work can be fully automated. And I think it's an important metaphor to me because it mirrors a lot of what we saw with computing, with computer architecture generally. We started out with the CPU, very general purpose, then the GPU came along, much better at these parallel computations. We talk a lot about human versus machine as substitutes, but it's like CPU and GPU: it's not that one is categorically better than the other, they're complements. If you have something really parallel, use a GPU; if not, use a CPU. The whole relationship, that symbiosis between CPU and GPU, has obviously evolved a lot since, you know, playing Quake 2 or something. But right now we have the human CPU doing a lot of, you know, silicon CPU tasks.
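A toy illustration of the attention-filter idea Drew sketches above, under the assumption that embedding similarity against a short statement of priorities can stand in for relevance. The messages and priorities are invented, and a production system might well use a fine-tuned classifier instead.

# Toy "attention filter": let the silicon brain read everything and surface
# only what matters. Assumes `pip install sentence-transformers`.
# The messages and priorities below are made up for illustration.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

priorities = "Q3 launch planning, hiring for the infra team, board meeting prep"
messages = [
    "Reminder: submit your expense reports by Friday",
    "Draft agenda for the board meeting next week",
    "Newsletter: 10 productivity hacks",
    "Candidate accepted the infra SRE offer!",
]

p = model.encode([priorities], normalize_embeddings=True)[0]
vecs = model.encode(messages, normalize_embeddings=True)

# Show the human only the handful of items most relevant to their priorities;
# the computer reads all of them so the human does not have to.
ranked = sorted(zip(vecs @ p, messages), reverse=True)
for score, msg in ranked[:2]:
    print(f"{score:.2f}  {msg}")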
And so you really have to redesign the work thoughtfully, such that, probably not that different from how it's evolved in computer architecture, the CPU is sort of an orchestrator of these really heavy lifting GPU tasks. That dividing line does shift a little bit with every generation. And so I think we need to think about knowledge work in that context: what are human brains good at? What's our silicon brain good at? Let's resegment the work. Let's offload all the stuff that can be automated. Let's go on a hunt for anything that could save a human CPU cycle, and let's give it to the silicon one. And so I think we're in the early innings of actually being able to do something about it.Alessio [00:48:00]: It's funny, I gave a talk to a few government people earlier this year with a similar point, where we used to make machines to release human labor, and the kilowatt hour was kind of the unit for a lot of countries. And now you're doing the same thing with the brain, and the data centers are kind of computational power plants; you know, they're kind of on-demand tokens. You're on the board of Meta, which is the number one donor of FLOPs for the open source world. The thing about open source AI is the model can be open source, but you need to carry a briefcase to actually run a model that is maybe not even that good compared to some of the big ones. How do you think about some of the differences in the open source ethos with traditional software, where it's really easy to run and act on it, versus models, where it might be open source, but I'm kind of limited in what I can do with it?Drew [00:48:45]: Yeah, well, I think with every new era of computing, there's sort of a tug of war between, is this going to be an open one or a closed one? And there are pros and cons to both. It's not like open is always better or open always wins. But, you know, you look at how the PC era and the internet era started out being more on the open side: it's very modular, everybody could sort of participate. There were some downsides to that, like security. But I think with the advent of AI, there's a real question, given the capital intensity of what it takes to train these foundation models: are we going to live in a world of oligopoly or cartel, where there are a few companies that have the keys and we're all just paying them rent? You know, that's one future. Or is it going to be more open and accessible? And I'm super happy with how that's gone; I find it exciting on many levels, with all the different hats I wear. You know, fortunately, you've seen it in real life: even if people aren't bringing GPUs on a plane or something, you've seen the price performance of these models improve 10 or 100x year over year, which is sort of like many Moore's laws compounded together, and for a bunch of reasons that wouldn't have happened without open source. For a lot of the same reasons, it's probably better that anyone can spin up a website without having to buy an Internet Information Server license, like in some alternative future. So things like Linux are really good, and there was a good balance of trade, where people contribute their code and then also benefit from the community returning the favor. I mean, you're seeing that with open source.
So you wouldn't see all this flourishing of research and the democratization of access to compute without open source. And so I think it's been phenomenally successful in terms of just moving the ball forward on pretty much anything you care about, I believe, even safety: you can have a lot more eyes on it and transparency, instead of something just happening in three places with nuclear power plants attached to them, right? So I think it's been awesome to see. And then, wearing my Dropbox hat, anybody who's scaling a service to millions of people is, again, probably not using frontier models for every request. There are a lot of different configurations, mostly with smaller models. And before you even talk about getting on the device, you need this whole constellation of different options. So open source has been great for that.Alessio [00:51:06]: And you were one of the first companies to do cloud repatriation. You brought all the storage back into your own data centers. Where are we in the AI wave for that? I don't think people really care today to bring the models in-house. Do you think people will care in the future, especially as you have more small models and you want to control more of the economics? Or are the tokens so subsidized that it just doesn't matter? It's more like a principle. Yeah. Yeah.Drew [00:51:30]: I mean, I think there's another one where thinking about the future is a lot easier if you start with the past. So there's definitely this big surge in demand, as there's sort of this FOMO-driven bubble of all of big tech taking their earnings and shipping them to Jensen for a couple of years. And you're like, all right, well, first of all, we've seen this kind of thing before, in the late 90s with fiber, you know, this huge race to own the internet, own the information superhighway, literally, and then way overbuilt. And then there was this crash. I don't know to what extent, like maybe it is really different this time. Or, you know, maybe we create AGI that will sort of solve the rest of it, or we'll just have a different set of things to worry about. But the simplest way I think about it is that this is sort of a rent-not-buy phase, because we're still so early in the maturity. You know, I wouldn't want to be buying pallets of 286s at a 5x markup when the 386 and 486 and Pentium and everything are clearly coming around the corner. And again, because of open source, there's just been a lot more com

The Changelog
Leveling up JavaScript with Deno 2 (Interview)

The Changelog

Play Episode Listen Later Sep 26, 2024 75:12


Jerod is joined by Ryan Dahl to discuss his second take on leveling up JavaScript developers all around the world. Jerod asks Ryan why not try to fix or fork Node instead of starting fresh, how Deno (the open source project) can avoid the all too common rug pull (not cool) scenario, what's new in Deno 2 & their pragmatic decision to support npm, they talk JSR, they talk Deno KV & SQLite, they even talk about Ryan's open letter to Oracle in an attempt to free the unused "JavaScript" trademark from the giant's clutches.

Accidental Tech Podcast
606: A Decade of Half-Presses

Accidental Tech Podcast

Play Episode Listen Later Sep 24, 2024 160:20


Join us in raising money for St. Jude Children's Research Hospital Donate in ATP's name! Ryan Ricard's great idea Calculate your Marco Offset Podcastathon: Casey was live from noon → midnight on Friday, 20 September Follow-up: John filename extraction from Photos From the ATP Insider: Photo Workflows special John's AppleScript Other suggestions Shortcuts (for example, Josh Woodward's) osxphotos (via Muli) Directly read SQLite database Suggestion from Will Leinweber Use Swift/PhotoKit Example from Alex Mazanov SwiftBar Always sending photos to the shared library (via Alexander Faxå) JPEG-XL in iOS 18 on iPhone 16 iPhone apps and spying via the microphone Eavesdropping via Siri Secure Exclave iPhone 16 & 16 Plus battery removal iFixIt teardown iFixIt blog post Jeff Johnson figured out how to silence the monthly screen recording prompts Sometimes the keys are bundle IDs Shell script to push everything out a year by Kyle Rubenok Amnesia App has come out to handle this for you TextSniper iPhone 16 Pro Impressions Is it green or blue? Belkin UltraGlass 2 Screen Protector Hypercritical #86: Naked Robotic Core AirPods 4 with ANC Impressions Apple Watch Series 10 Impressions Post-show Neutral: Volvo EX90's annoying "feature" Animation Timestamp link to a video review Still images Bugatti Chiron Headlight replacement Members-only ATP Overtime: Meta kicks Apple when it's down Meta canceled its Vision Pro competitor The Information Zuckerberg calls Meta the "opposite of Apple" Interview video Sponsored by: Trade Coffee: Coffee at home, made better. Get your first bag free! 1Password Extended Access Management: Secure every sign-in for every app on every device. Become a member for ATP Overtime, ad-free episodes, member specials, and our early-release, unedited "bootleg" feed!

Syntax - Tasty Web Development Treats

In this episode of Syntax, Wes and Scott talk about the latest features in Node.js, including native support for TypeScript, .env parsing, a built-in test runner, watch mode, SQLite integration, glob support, and top-level await. They also discuss some wishlist items, and experimental features like WebSocket support and the require module. Show Notes 00:00 Welcome to Syntax! 01:13 Brought to you by Sentry.io 01:37 Node.js new features Deno Bun 02:51 TypeScript tsx swc/wasm-typescript 10:03 SQLite v22.5 14:35 .env support 16:24 Test runner Jest 19:42 Watch Mode nodemon 21:22 Glob support 22:48 Top-Level Await Top-level await is a footgun 26:40 Experimental require module Default ESM Detection Web request standards HonoJS 29:39 Experimental WebSocket support 30:13 Async local storage 31:43 Single file executables 32:46 Wishlist 32:54 Hot reload 34:20 Window shim globalThis 35:30 Better server 35:56 Better terminal integration NIM styleText chalk warp 41:36 Twitter responses Coolify n 46:54 Sick Picks + Shameless Plugs Sick Picks Scott: Cascadia Wes: Roborock Qrevo Shameless Plugs Scott: YouTube Channel Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads