Markup language developed by the W3C for encoding of data
POPULARITY
Categories
This Day in Legal History: SEC EstablishedOn this day in legal history, June 6, 1934, the United States Securities and Exchange Commission (SEC) was established as part of the sweeping reforms of the New Deal. The SEC was created by the Securities Exchange Act of 1934 in response to the stock market crash of 1929 and the ensuing Great Depression, which exposed widespread fraud, manipulation, and lack of oversight in the financial markets. Its primary mission was, and remains, to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation.President Franklin D. Roosevelt appointed Joseph P. Kennedy, a former stockbroker and businessman, as the SEC's first chairman. The choice was controversial—Kennedy had profited handsomely from some of the same speculative practices the SEC was meant to prevent—but Roosevelt believed that Kennedy's insider knowledge would make him an effective regulator.The SEC was empowered to regulate the securities industry, enforce federal securities laws, and oversee the nation's stock and options exchanges. Among its early duties were requiring public companies to file detailed financial disclosures, registering securities before public offering, and monitoring insider trading. The commission also played a key role in restoring investor confidence in U.S. capital markets during a time of deep financial mistrust.Over time, the SEC expanded its reach, responding to new financial products, trading technologies, and crises. From investigating corporate accounting scandals like Enron and WorldCom, to managing the regulatory fallout of the 2008 financial crisis, the SEC has remained a pivotal force in shaping American financial law. It continues to evolve, now addressing issues such as crypto asset regulation, ESG disclosures, and algorithmic trading.Speaking of the SEC, U.S. District Judge Reggie Walton dismissed a lawsuit challenging the SEC 2020 rule changes that made it more difficult for shareholders to submit proposals at corporate annual meetings. The rules, enacted late in President Trump's term, raised the ownership thresholds and lengthened holding periods required to file shareholder proposals. They also introduced stricter resubmission requirements for proposals previously rejected by shareholders.The plaintiffs, including the Interfaith Center on Corporate Responsibility, As You Sow, and shareholder advocate James McRitchie, argued the changes disproportionately harmed proposals on environmental, social, and governance (ESG) issues and reduced long-term shareholder value. They claimed the SEC failed to assess the benefits of such proposals before implementing the rules.Judge Walton rejected these claims, ruling that the SEC adequately justified the changes under its mandate to promote efficiency, competition, and capital formation. The SEC, which had defended the rules during both the Trump and Biden administrations, argued that the reforms ensured shareholder proposals had broader relevance and potential for meaningful corporate action. The 2020 vote on the rule changes split along party lines, with Republican commissioners in support. While the SEC declined to comment on the ruling, the plaintiffs expressed disappointment and affirmed their commitment to corporate engagement on environmental and social issues.SEC wins dismissal of lawsuit challenging tighter rules on shareholder proposals | ReutersOpenAI filed an appeal challenging a court order that requires it to indefinitely preserve ChatGPT output data in an ongoing copyright lawsuit brought by The New York Times. OpenAI argues the order conflicts with its user privacy commitments and sets a troubling precedent. The preservation directive was issued last month after The Times requested that all relevant log data be maintained and segregated.OpenAI CEO Sam Altman publicly criticized the order on social media, affirming the company's stance against actions it sees as compromising user privacy. The appeal, filed on June 3, asks U.S. District Judge Sidney Stein to vacate the preservation requirement.The lawsuit, filed in 2023, accuses OpenAI and Microsoft of using millions of Times articles without permission to train ChatGPT. In April, Judge Stein ruled that The Times had plausibly alleged that OpenAI and Microsoft may have encouraged users to reproduce copyrighted content. The ruling rejected parts of a motion to dismiss the case and allowed several of the Times' claims to move forward, citing multiple examples of ChatGPT generating material closely resembling Times articles.OpenAI appeals data preservation order in NYT copyright case | ReutersPresident Donald Trump's 2026 budget proposal includes a plan to eliminate the Legal Services Corporation (LSC), an independent agency that funds civil legal aid for low-income Americans. The proposal seeks $21 million for an "orderly closeout" of the organization, which had requested $2.1 billion to meet growing demand. The LSC supports 130 nonprofit legal aid programs that assist with issues such as evictions, disaster recovery, and access to public benefits.Critics warn that the move would devastate legal aid access for millions, particularly in rural areas and the South. In Louisiana, for example, there is just one legal aid lawyer for every 11,250 eligible residents. Legal aid leaders say they already turn away half of those seeking help due to budget constraints, and the proposed funding cut would further limit their reach.Organizations like Southeast Louisiana Legal Services and Legal Aid of North Carolina would lose 40–50% of their funding, jeopardizing services for communities still recovering from recent hurricanes. Legal Services NYC, the largest legal aid provider in the country, has implemented a hiring freeze in anticipation of possible cuts.The proposal revives a long-standing conservative goal. Past Republican efforts to dismantle the LSC date back to the Reagan era, and Trump made a similar attempt in 2018. The Heritage Foundation has accused the LSC of supporting controversial causes, but legal aid advocates argue the organization is vital to community stability and fairness in the justice system.Trump Plan to Ax Legal Aid a Conservative Aim That Targets PoorIn a piece I wrote for Forbes last week, I discuss how the IRS has quietly released the underlying codebase for its Direct File program on GitHub, marking a rare moment of transparency in government software. At the center of this release is something called the “Fact Graph,” a logic engine that models tax rules as interrelated facts rather than a linear checklist. Built using XML and Scala, the Fact Graph interprets ambiguous tax data, identifies contradictions or omissions, and suggests paths forward, all in a transparent, declarative format.What sets this apart is that, unlike proprietary tax software, Direct File's logic isn't hidden—it's open, reviewable, and potentially improvable by anyone. This move not only demystifies some of the inner workings of tax enforcement but also sets a precedent: if algorithms are mediating our legal obligations, we should be able to see and understand the rules they follow.The release is particularly striking in an era of eroding public trust in institutions and increasing reliance on automated decision-making. While Direct File itself remains limited in scope and its future uncertain, the open-sourcing of its logic engine may have laid the groundwork for broader change. Other agencies—from state tax departments to those experimenting with AI-driven policy enforcement—could adopt similar transparency, allowing the public to engage with and even help refine the systems that govern them.Peeking Behind The Code—IRS Just Open-Sourced Direct FileThis week's closing theme is by Robert Schumann and comes courtesy of Christopher Zbinden. This week's closing theme is Robert Schumann's Toccata in C major, Op. 7, a dazzling showcase of Romantic-era pianism and one of the most technically demanding works in the standard repertoire. Composed in 1830 and revised in 1833, the piece earned a reputation early on as a pianist's Everest—Franz Liszt himself dubbed it “the hardest piece ever written.” Clocking in at just over five minutes when played at tempo, it's a relentless whirlwind of perpetual motion, requiring both physical stamina and interpretive precision.The toccata form, traditionally a virtuosic keyboard piece emphasizing dexterity, becomes in Schumann's hands something more cerebral. Beneath its bravura surface lies a structure built on two contrasting themes, developed with intricate counterpoint and rhythmic displacement. The left hand must execute rapid repeated notes and wide leaps with precision, while the right weaves through syncopated figures and chromatic runs, creating a dense musical texture.Schumann dedicated the piece to his friend Ludwig Schuncke, who had recently died at the age of 23. That personal connection adds an emotional layer to a work that might otherwise be heard as pure technical spectacle. Unlike many showpieces of the era, Schumann's Toccata isn't just difficult for difficulty's sake—it's an expression of obsession, energy, and youthful ambition.For a composer better known for lyrical piano miniatures, the Toccata is an early signal of the depth and range Schumann would explore in later works. As this week closes, it offers a fitting sendoff: intricate, driven, and a little manic—in the best Romantic sense of the word.Without further ado, Robert Schumann's Toccata in C major, Op. 7 – enjoy! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
Send us a textNavigating the complex landscape of authentication frameworks is essential for any cybersecurity professional, especially those preparing for the CISSP exam. This deep-dive episode unravels the intricate world of authentication systems that protect our digital identities across multiple platforms and services.We begin by examining OAuth 2.0 and OpenID Connect (OIDC), exploring how these token-based frameworks revolutionize third-party authentication without exposing user credentials. When you click "Login with Google," you're experiencing these protocols in action—reducing password reuse while maintaining security across digital services. Learn the difference between authorization flows and how these systems interact to verify your identity seamlessly across the web.The podcast then transitions to Security Assertion Markup Language (SAML), breaking down how this XML-based protocol establishes trust between identity providers and service providers. Through practical examples, we illustrate how SAML enables web single sign-on capabilities across educational institutions, corporate environments, and cloud services—creating that "connective tissue" between disparate systems while enhancing both security and user experience.Kerberos, MIT's powerful network authentication protocol, takes center stage as we explore its ticketing system architecture. Named after the three-headed dog of Greek mythology, this protocol's Authentication Service, Ticket Granting Service, and Key Distribution Center work in concert to verify identities without transmitting passwords across networks. We also discuss critical considerations like time synchronization requirements that can make or break your Kerberos implementation.For remote authentication scenarios, we compare RADIUS and TACACS+ protocols, highlighting their distinct approaches to the AAA (Authentication, Authorization, and Accounting) framework. Discover why network administrators choose UDP-based RADIUS for general network access while preferring the TCP-based TACACS+ for granular administrative control with command-level authorization and full payload encryption.Whether you're studying for the CISSP exam or looking to strengthen your organization's security posture, this episode provides the knowledge foundation you need to implement robust authentication systems in today's interconnected world. Visit CISSP Cyber Training for additional resources to support your cybersecurity journey.Gain exclusive access to 360 FREE CISSP Practice Questions delivered directly to your inbox! Sign up at FreeCISSPQuestions.com and receive 30 expertly crafted practice questions every 15 days for the next 6 months—completely free! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
Jason Martin is an AI Security Researcher at HiddenLayer. This episode explores “policy puppetry,” a universal attack technique bypassing safety features in all major language models using structured formats like XML or JSON.Subscribe to the Gradient Flow Newsletter
Filippo La Porta, Benedetto Intrigila"L'evidenza e l'emozione"Una passeggiata fra scienza e letteraturaLuca Sossella Editorewww.lucasossellaeditore.itDue amici di gioventù, che hanno condiviso gli studi universitari di filosofia – e in parte la turbolenta stagione politica dei primi anni settanta –, si ritrovano a ragionare delle loro passioni di allora, della loro “educazione sentimentale”, pensando alle giovani generazioni.Un critico letterario e un informatico dialogano passeggiando a partire dalla separazione tra le due culture – scientifica e umanistica – e poi via via affrontando i temi che considerano oggi più urgenti: attualità o meno del “metodo” dialettico, necessità della fede religiosa, critica dell'esistente, intelligenza artificiale e autenticità dell'esperienza, rapporto natura-cultura.Filippo La PortaCritico letterario e saggista. Scrive su Robinson de “la Repubblica” e collabora all'“Unità”.Ha una rubrica sul settimanale “Left”, collabora con Di Martedì La7.Ha pubblicato L'arte del riassunto. Come liberarsi del superfluo; Pasolini. Profili di storia letteraria; Uno sguardo sulla città. Gli scrittori italiani contemporanei e i loro luoghi; Roma è una bugia; Il bene e gli altri: Dante e un'etica per il nuovo millennio; Come un raggio nell'acqua. Dante e la relazione con l'altro.Insegna alla Luiss e alla Scuola Holden.Benedetto IntrigilaGià professore ordinario di informatica presso l'Università di Roma “Tor Vergata”.Laurea in Matematica (cum laude) e in Filosofia (cum laude).I suoi interessi di ricerca sono nel campo del lambda calcolo e delle applicazioni dei metodi formali all'informatica. Oltre al lambda calcolo la sua ricerca si è svolta fra l'altro nel campo della verifica dei sistemi software-like (protocolli) e dei sistemi ibridi, nel campo delle tecnologie XML e dei linguaggi formali.È autore di oltre 60 lavori scientifici su riviste e conferenze internazionali.In precedenza – dopo aver superato con successo un corso-concorso presso la Scuola Superiore della P.A – ha lavorato nel campo dell'informatica e dell'organizzazione del lavoro come funzionario del Ministero delle Finanze.IL POSTO DELLE PAROLEascoltare fa pensarewww.ilpostodelleparole.itDiventa un supporter di questo podcast: https://www.spreaker.com/podcast/il-posto-delle-parole--1487855/support.
In dieser News-Ausgabe sprechen wir über Änderungen an der V8 JavaScript Engine, die euch erlauben, Dateien mit Explicit Compile Hints für die direkte Kompilierung zu markieren. In der neuen Chrome-Version kann das hunderte Millisekunden Beschleunigung bringen.Wir diskutieren außerdem, warum die WCAG anfängt, ihr Kernthema „Accessibility“ anders zu denken und zu bewerten.Dave berichtet von einem Bug, der still und leise Nachrichten in Apples iMessage verschluckt & was genau XML damit zu tun hat.Fabi nutzt zwar mittlerweile mehrheitlich Cursor als IDE, war aber trotzdem erstaunt über die neuesten Änderungen und Verbesserungen im Umgang mit AI und Copilot in Visual Studio Code.Nachdem es zuletzt mit Evo nicht geklappt hatte, eine Alternative zu git zu etablieren, nimmt das Projekt JJ (Jujutsu) immer mehr an Fahrt auf. Jan legt dar, welche Vorteile das Projekt gegenüber Git mitbringt und was Google damit zu tun hat.Und zu guter Letzt berichtet Dennis, wie es (White-Hat-)Hacker:innen gelungen ist, die Kontrolle über einen Nissan Leaf zu übernehmen und was sie alles damit anstellen konnten.Schreibt uns! Schickt uns eure Themenwünsche und euer Feedback: podcast@programmier.barFolgt uns! Bleibt auf dem Laufenden über zukünftige Folgen und virtuelle Meetups und beteiligt euch an Community-Diskussionen. BlueskyInstagramLinkedInMeetupYouTube
#370 - La Nicchia Sei TuRiflessioni sul futuro del posizionamento di nicchia nell'epoca della sovrabbondanza di informazioni e cooscenza._______________Info Utili• Sostieni questo podcast:Entra in contatto con me, ottieni feedback, ricevi consigli sul tuo progetto onlinehttps://Patreon.com/Robin_Good• Musica di questa puntata:"Bittersweet" by baaskaT disponibile su Soundcloud• Nella foto di copertina:Con il mio amico Stefano A.. Milano, XML. Novembre 2019. • Canale Instagram (dalla Thailandia - ciò che vedono i miei occhi: momenti non in posa):https://instagram.com/giggi_canali • Ascolta e condividi questo podcast:https://www.spreaker.com/show/dabrandafriendArchivio completo organizzato per temi:https://start.me/p/kxENzk/da-brand-a-friend-archivio-podcast• Seguimi su Telegram:https://t.me/RobinGoodItalia• Newsletter in Inglese:https://robingood.substack.com - Fuoco su costruire fiducia per chi fa l'imprenditore onlinehttps://goodtools.substack.com - Tool alternativi a costo zerohttps://curationmonetized.substack.com - Esempi di come monetizzare organizzando informazioni.
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SRUM-DUMP Version 3: Uncovering Malware Activity in Forensics Mark Baggett released SRUM-DUMP Version 3. The tool simplifies data extraction from Widnows System Resource Usage Monitor (SRUM). This database logs how much resources software used for 30 days, and is invaluable to find out what software was executed when and if it sent or received network data. https://isc.sans.edu/diary/SRUM-DUMP%20Version%203%3A%20Uncovering%20Malware%20Activity%20in%20Forensics/31896 Novel Universal Bypass For All Major LLMS Hidden Layer discovered a new prompt injection technique that bypasses security constraints in large language models. The technique uses an XML formatted prequel for a prompt, which appears to the LLM as a policy file. This Policy Puppetry can be used to rewrite some of the security policies configured for LLMs. Unlike other techniques, this technique works across multiple LLMs without changing the policy. https://hiddenlayer.com/innovation-hub/novel-universal-bypass-for-all-major-llms/ CHOICEJACKING: Compromising Mobile Devices through Malicious Chargers like a Decade ago The old Juice Jacking is back, at least if you do not run the latest version of Android or iOS. This issue may allow a malicious USB device, particularly a USB charger, to take control of a device connected to it. https://pure.tugraz.at/ws/portalfiles/portal/89650227/Final_Paper_Usenix.pdf SANS @RSA: https://www.sans.org/mlp/rsac/
We'll keep this brief because we're on a tight turnaround: GPT 4.1, previously known as the Quasar and Optimus models, is now live as the natural update for 4o/4o-mini (and the research preview of GPT 4.5). Though it is a general purpose model family, the headline features are: Coding abilities (o1-level SWEBench and SWELancer, but ok Aider) Instruction Following (with a very notable prompting guide) Long Context up to 1m tokens (with new MRCR and Graphwalk benchmarks) Vision (simply o1 level) Cheaper Pricing (cheaper than 4o, greatly improved prompt caching savings) We caught up with returning guest Michelle Pokrass and Josh McGrath to get more detail on each! Chapters 00:00:00 Introduction and Guest Welcome 00:00:57 GPC 4.1 Launch Overview 00:01:54 Developer Feedback and Model Names 00:02:53 Model Naming and Starry Themes 00:03:49 Confusion Over GPC 4.1 vs 4.5 00:04:47 Distillation and Model Improvements 00:05:45 Omnimodel Architecture and Future Plans 00:06:43 Core Capabilities of GPC 4.1 00:07:40 Training Techniques and Long Context 00:08:37 Challenges in Long Context Reasoning 00:09:34 Context Utilization in Models 00:10:31 Graph Walks and Model Evaluation 00:11:31 Real Life Applications of Graph Tasks 00:12:30 Multi-Hop Reasoning Benchmarks 00:13:30 Agentic Workflows and Backtracking 00:14:28 Graph Traversals for Agent Planning 00:15:24 Context Usage in API and Memory Systems 00:16:21 Model Performance in Long Context Tasks 00:17:17 Instruction Following and Real World Data 00:18:12 Challenges in Grading Instructions 00:19:09 Instruction Following Techniques 00:20:09 Prompting Techniques and Model Responses 00:21:05 Agentic Workflows and Model Persistence 00:22:01 Balancing Persistence and User Control 00:22:56 Evaluations on Model Edits and Persistence 00:23:55 XML vs JSON in Prompting 00:24:50 Instruction Placement in Context 00:25:49 Optimizing for Prompt Caching 00:26:49 Chain of Thought and Reasoning Models 00:27:46 Choosing the Right Model for Your Task 00:28:46 Coding Capabilities of GPC 4.1 00:29:41 Model Performance in Coding Tasks 00:30:39 Understanding Coding Model Differences 00:31:36 Using Smaller Models for Coding 00:32:33 Future of Coding in OpenAI 00:33:28 Internal Use and Success Stories 00:34:26 Vision and Multi-Modal Capabilities 00:35:25 Screen vs Embodied Vision 00:36:22 Vision Benchmarks and Model Improvements 00:37:19 Model Deprecation and GPU Usage 00:38:13 Fine-Tuning and Preference Steering 00:39:12 Upcoming Reasoning Models 00:40:10 Creative Writing and Model Humor 00:41:07 Feedback and Developer Community 00:42:03 Pricing and Blended Model Costs 00:44:02 Conclusion and Wrap-Up
This episode is a full-circle moment! Crystal reconnects with the person who first planted the seed that SEO could actually drive traffic—without the middleman of social media. Meet Favour Obasi-ike: a podcasting pioneer, SEO strategist, and founder of We Don't Play Podcast. Together, Crystal and Favour take a nostalgic ride through their Clubhouse days, unpack the magic of Pinterest SEO, explore the future of RSS feeds, and share why your website needs a sitemap. Like, yesterday!Whether you're a seasoned business owner or just now dipping your toe into SEO, this episode is packed with heart, tech, and next-level brand-building tips.
Join Matthias Reinwarth in this special episode of the KuppingerCole Analyst Chat as he welcomes not one but two expert guests: Nitish Deshpande, Research Analyst at KuppingerCole, and Martin Kuppinger, Principal Analyst and Co-Founder of KuppingerCole. Together, they explore the evolution of modern authorization, discussing how far the industry has come since the early days of static entitlements and XML-based policies. From early insights shared back in 2009 to today’s dynamic, AI-enhanced, signal-driven authorization models, this episode unpacks the what, why, and how of modern access control systems.
Join Matthias Reinwarth in this special episode of the KuppingerCole Analyst Chat as he welcomes not one but two expert guests: Nitish Deshpande, Research Analyst at KuppingerCole, and Martin Kuppinger, Principal Analyst and Co-Founder of KuppingerCole. Together, they explore the evolution of modern authorization, discussing how far the industry has come since the early days of static entitlements and XML-based policies. From early insights shared back in 2009 to today’s dynamic, AI-enhanced, signal-driven authorization models, this episode unpacks the what, why, and how of modern access control systems.
Brandon Liu is an open source developer and creator of the Protomaps basemap project. We talk about how static maps help developers build sites that last, the PMTiles file format, the role of OpenStreetMap, and his experience funding and running an open source project full time. Protomaps Protomaps PMTiles (File format used by Protomaps) Self-hosted slippy maps, for novices (like me) Why Deploy Protomaps on a CDN User examples Flickr Pinball Map Toilet Map Related projects OpenStreetMap (Dataset protomaps is based on) Mapzen (Former company that released details on what to display based on zoom levels) Mapbox GL JS (Mapbox developed source available map rendering library) MapLibre GL JS (Open source fork of Mapbox GL JS) Other links HTTP range requests (MDN) Hilbert curve Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: I'm talking to Brandon Liu. He's the creator of Protomaps, which is a way to easily create and host your own maps. Let's get into it. [00:00:09] Brandon: Hey, so thanks for having me on the podcast. So I'm Brandon. I work on an open source project called Protomaps. What it really is, is if you're a front end developer and you ever wanted to put maps on a website or on a mobile app, then Protomaps is sort of an open source solution for doing that that I hope is something that's way easier to use than, um, a lot of other open source projects. Why not just use Google Maps? [00:00:36] Jeremy: A lot of people are gonna be familiar with Google Maps. Why should they worry about whether something's open source? Why shouldn't they just go and use the Google maps API? [00:00:47] Brandon: So Google Maps is like an awesome thing it's an awesome product. Probably one of the best tech products ever right? And just to have a map that tells you what restaurants are open and something that I use like all the time especially like when you're traveling it has all that data. And the most amazing part is that it's free for consumers but it's not necessarily free for developers. Like if you wanted to embed that map onto your website or app, that usually has an API cost which still has a free tier and is affordable. But one motivation, one basic reason to use open source is if you have some project that doesn't really fit into that pricing model. You know like where you have to pay the cost of Google Maps, you have a side project, a nonprofit, that's one reason. But there's lots of other reasons related to flexibility or customization where you might want to use open source instead. Protomaps examples [00:01:49] Jeremy: Can you give some examples where people have used Protomaps and where that made sense for them? [00:01:56] Brandon: I follow a lot of the use cases and I also don't know about a lot of them because I don't have an API where I can track a hundred percent of the users. Some of them use the hosted version, but I would say most of them probably use it on their own infrastructure. One of the cool projects I've been seeing is called Toilet Map. And what toilet map is if you're in the UK and you want find a public restroom then it maps out, sort of crowdsourced all of the public restrooms. And that's important for like a lot of people if they have health issues, they need to find that information. And just a lot of different projects in the same vein. There's another one called Pinball Map which is sort of a hobby project to find all the pinball machines in the world. And they wanted to have a customized map that fit in with their theme of pinball. So these sorts of really cool indie projects are the ones I'm most excited about. Basemaps vs Overlays [00:02:57] Jeremy: And if we talk about, like the pinball map as an example, there's this concept of a basemap and then there's the things that you lay on top of it. What is a basemap and then is the pinball locations is that part of it or is that something separate? [00:03:12] Brandon: It's usually something separate. The example I usually use is if you go to a real estate site, like Zillow, you'll open up the map of Seattle and it has a bunch of pins showing all the houses, and then it has some information beneath it. That information beneath it is like labels telling, this neighborhood is Capitol Hill, or there is a park here. But all that information is common to a lot of use cases and it's not specific to real estate. So I think usually that's the distinction people use in the industry between like a base map versus your overlay. The overlay is like the data for your product or your company while the base map is something you could get from Google or from Protomaps or from Apple or from Mapbox that kind of thing. PMTiles for hosting the basemap and overlays [00:03:58] Jeremy: And so Protomaps in particular is responsible for the base map, and that information includes things like the streets and the locations of landmarks and things like that. Where is all that information coming from? [00:04:12] Brandon: So the base map information comes from a project called OpenStreetMap. And I would also, point out that for Protomaps as sort of an ecosystem. You can also put your overlay data into a format called PMTiles, which is sort of the core of what Protomaps is. So it can really do both. It can transform your data into the PMTiles format which you can host and you can also host the base map. So you kind of have both of those sides of the product in one solution. [00:04:43] Jeremy: And so when you say you have both are you saying that the PMTiles file can have, the base map in one file and then you would have the data you're laying on top in another file? Or what are you describing there? [00:04:57] Brandon: That's usually how I recommend to do it. Oftentimes there'll be sort of like, a really big basemap 'cause it has all of that data about like where the rivers are. Or while, if you want to put your map of toilets or park benches or pickleball courts on top, that's another file. But those are all just like assets you can move around like JSON or CSV files. Statically Hosted [00:05:19] Jeremy: And I think one of the things you mentioned was that your goal was to make Protomaps or the, the use of these PMTiles files easy to use. What does that look like for, for a developer? I wanna host a map. What do I actually need to, to put on my servers? [00:05:38] Brandon: So my usual pitch is that basically if you know how to use S3 or cloud storage, that you know how to deploy a map. And that, I think is the main sort of differentiation from most open source projects. Like a lot of them, they call themselves like, like some sort of self-hosted solution. But I've actually avoided using the term self-hosted because I think in most cases that implies a lot of complexity. Like you have to log into a Linux server or you have to use Kubernetes or some sort of Docker thing. What I really want to emphasize is the idea that, for Protomaps, it's self-hosted in the same way like CSS is self-hosted. So you don't really need a service from Amazon to host the JSON files or CSV files. It's really just a static file. [00:06:32] Jeremy: When you say static file that means you could use any static web host to host your HTML file, your JavaScript that actually renders the map. And then you have your PMTiles files, and you're not running a process or anything, you're just putting your files on a static file host. [00:06:50] Brandon: Right. So I think if you're a developer, you can also argue like a static file server is a server. It's you know, it's the cloud, it's just someone else's computer. It's really just nginx under the hood. But I think static storage is sort of special. If you look at things like static site generators, like Jekyll or Hugo, they're really popular because they're a commodity or like the storage is a commodity. And you can take your blog, make it a Jekyll blog, hosted on S3. One day, Amazon's like, we're charging three times as much so you can move it to a different cloud provider. And that's all vendor neutral. So I think that's really the special thing about static storage as a primitive on the web. Why running servers is a problem for resilience [00:07:36] Jeremy: Was there a prior experience you had? Like you've worked with maps for a very long time. Were there particular difficulties you had where you said I just gotta have something that can be statically hosted? [00:07:50] Brandon: That's sort of exactly why I got into this. I've been working sort of in and around the map space for over a decade, and Protomaps is really like me trying to solve the same problem I've had over and over again in the past, just like once and forever right? Because like once this problem is solved, like I don't need to deal with it again in the future. So I've worked at a couple of different companies before, mostly as a contractor, for like a humanitarian nonprofit for a design company doing things like, web applications to visualize climate change. Or for even like museums, like digital signage for museums. And oftentimes they had some sort of data visualization component, but always sort of the challenge of how to like, store and also distribute like that data was something that there wasn't really great open source solutions. So just for map data, that's really what motivated that design for Protomaps. [00:08:55] Jeremy: And in those, those projects in the past, were those things where you had to run your own server, run your own database, things like that? [00:09:04] Brandon: Yeah. And oftentimes we did, we would spin up an EC2 instance, for maybe one client and then we would have to host this server serving map data forever. Maybe the client goes away, or I guess it's good for business if you can sign some sort of like long-term support for that client saying, Hey, you know, like we're done with a project, but you can pay us to maintain the EC2 server for the next 10 years. And that's attractive. but it's also sort of a pain, because usually what happens is if people are given the choice, like a developer between like either I can manage the server on EC2 or on Rackspace or Hetzner or whatever, or I can go pay a SaaS to do it. In most cases, businesses will choose to pay the SaaS. So that's really like what creates a sort of lock-in is this preference for like, so I have this choice between like running the server or paying the SaaS. Like businesses will almost always go and pay the SaaS. [00:10:05] Jeremy: Yeah. And in this case, you either find some kind of free hosting or low-cost hosting just to host your files and you upload the files and then you're good from there. You don't need to maintain anything. [00:10:18] Brandon: Exactly, and that's really the ideal use case. so I have some users these, climate science consulting agencies, and then they might have like a one-off project where they have to generate the data once, but instead of having to maintain this server for the lifetime of that project, they just have a file on S3 and like, who cares? If that costs a couple dollars a month to run, that's fine, but it's not like S3 is gonna be deprecated, like it's gonna be on an insecure version of Ubuntu or something. So that's really the ideal, set of constraints for using Protomaps. [00:10:58] Jeremy: Yeah. Something this also makes me think about is, is like the resilience of sites like remaining online, because I, interviewed, Kyle Drake, he runs Neocities, which is like a modern version of GeoCities. And if I remember correctly, he was mentioning how a lot of old websites from that time, if they were running a server backend, like they were running PHP or something like that, if you were to try to go to those sites, now they're like pretty much all dead because there needed to be someone dedicated to running a Linux server, making sure things were patched and so on and so forth. But for static sites, like the ones that used to be hosted on GeoCities, you can go to the internet archive or other websites and they were just files, right? You can bring 'em right back up, and if anybody just puts 'em on a web server, then you're good. They're still alive. Case study of news room preferring static hosting [00:11:53] Brandon: Yeah, exactly. One place that's kind of surprising but makes sense where this comes up, is for newspapers actually. Some of the users using Protomaps are the Washington Post. And the reason they use it, is not necessarily because they don't want to pay for a SaaS like Google, but because if they make an interactive story, they have to guarantee that it still works in a couple of years. And that's like a policy decision from like the editorial board, which is like, so you can't write an article if people can't view it in five years. But if your like interactive data story is reliant on a third party, API and that third party API becomes deprecated, or it changes the pricing or it, you know, it gets acquired, then your journalism story is not gonna work anymore. So I have seen really good uptake among local news rooms and even big ones to use things like Protomaps just because it makes sense for the requirements. Working on Protomaps as an open source project for five years [00:12:49] Jeremy: How long have you been working on Protomaps and the parts that it's made up of such as PMTiles? [00:12:58] Brandon: I've been working on it for about five years, maybe a little more than that. It's sort of my pandemic era project. But the PMTiles part, which is really the heart of it only came in about halfway. Why not make a SaaS? [00:13:13] Brandon: So honestly, like when I first started it, I thought it was gonna be another SaaS and then I looked at it and looked at what the environment was around it. And I'm like, uh, so I don't really think I wanna do that. [00:13:24] Jeremy: When, when you say you looked at the environment around it what do you mean? Why did you decide not to make it a SaaS? [00:13:31] Brandon: Because there already is a lot of SaaS out there. And I think the opportunity of making something that is unique in terms of those use cases, like I mentioned like newsrooms, was clear. Like it was clear that there was some other solution, that could be built that would fit these needs better while if it was a SaaS, there are plenty of those out there. And I don't necessarily think that they're well differentiated. A lot of them all use OpenStreetMap data. And it seems like they mainly compete on price. It's like who can build the best three column pricing model. And then once you do that, you need to build like billing and metrics and authentication and like those problems don't really interest me. So I think, although I acknowledge sort of the indie hacker ethos now is to build a SaaS product with a monthly subscription, that's something I very much chose not to do, even though it is for sure like the best way to build a business. [00:14:29] Jeremy: Yeah, I mean, I think a lot of people can appreciate that perspective because it's, it's almost like we have SaaS overload, right? Where you have so many little bills for your project where you're like, another $5 a month, another $10 a month, or if you're a business, right? Those, you add a bunch of zeros and at some point it's just how many of these are we gonna stack on here? [00:14:53] Brandon: Yeah. And honestly. So I really think like as programmers, we're not really like great at choosing how to spend money like a $10 SaaS. That's like nothing. You know? So I can go to Starbucks and I can buy a pumpkin spice latte, and that's like $10 basically now, right? And it's like I'm able to make that consumer choice in like an instant just to spend money on that. But then if you're like, oh, like spend $10 on a SaaS that somebody put a lot of work into, then you're like, oh, that's too expensive. I could just do it myself. So I'm someone that also subscribes to a lot of SaaS products. and I think for a lot of things it's a great fit. Many open source SaaS projects are not easy to self host [00:15:37] Brandon: But there's always this tension between an open source project that you might be able to run yourself and a SaaS. And I think a lot of projects are at different parts of the spectrum. But for Protomaps, it's very much like I'm trying to move maps to being it is something that is so easy to run yourself that anyone can do it. [00:16:00] Jeremy: Yeah, and I think you can really see it with, there's a few SaaS projects that are successful and they're open source, but then you go to look at the self-hosting instructions and it's either really difficult to find and you find it, and then the instructions maybe don't work, or it's really complicated. So I think doing the opposite with Protomaps. As a user, I'm sure we're all appreciative, but I wonder in terms of trying to make money, if that's difficult. [00:16:30] Brandon: No, for sure. It is not like a good way to make money because I think like the ideal situation for an open source project that is open that wants to make money is the product itself is fundamentally complicated to where people are scared to run it themselves. Like a good example I can think of is like Supabase. Supabase is sort of like a platform as a service based on Postgres. And if you wanted to run it yourself, well you need to run Postgres and you need to handle backups and authentication and logging, and that stuff all needs to work and be production ready. So I think a lot of people, like they don't trust themselves to run database backups correctly. 'cause if you get it wrong once, then you're kind of screwed. So I think that fundamental aspect of the product, like a database is something that is very, very ripe for being a SaaS while still being open source because it's fundamentally hard to run. Another one I can think of is like tailscale, which is, like a VPN that works end to end. That's something where, you know, it has this networking complexity where a lot of developers don't wanna deal with that. So they'd happily pay, for tailscale as a service. There is a lot of products or open source projects that eventually end up just changing to becoming like a hosted service. Businesses going from open source to closed or restricted licenses [00:17:58] Brandon: But then in that situation why would they keep it open source, right? Like, if it's easy to run yourself well, doesn't that sort of cannibalize their business model? And I think that's really the tension overall in these open source companies. So you saw it happen to things like Elasticsearch to things like Terraform where they eventually change the license to one that makes it difficult for other companies to compete with them. [00:18:23] Jeremy: Yeah, I mean there's been a number of cases like that. I mean, specifically within the mapping community, one I can think of was Mapbox's. They have Mapbox gl. Which was a JavaScript client to visualize maps and they moved from, I forget which license they picked, but they moved to a much more restrictive license. I wonder what your thoughts are on something that releases as open source, but then becomes something maybe a little more muddy. [00:18:55] Brandon: Yeah, I think it totally makes sense because if you look at their business and their funding, it seems like for Mapbox, I haven't used it in a while, but my understanding is like a lot of their business now is car companies and doing in dash navigation. And that is probably way better of a business than trying to serve like people making maps of toilets. And I think sort of the beauty of it is that, so Mapbox, the story is they had a JavaScript renderer called Mapbox GL JS. And they changed that to a source available license a couple years ago. And there's a fork of it that I'm sort of involved in called MapLibre GL. But I think the cool part is Mapbox paid employees for years, probably millions of dollars in total to work on this thing and just gave it away for free. Right? So everyone can benefit from that work they did. It's not like that code went away, like once they changed the license. Well, the old version has been forked. It's going its own way now. It's quite different than the new version of Mapbox, but I think it's extremely generous that they're able to pay people for years, you know, like a competitive salary and just give that away. [00:20:10] Jeremy: Yeah, so we should maybe look at it as, it was a gift while it was open source, and they've given it to the community and they're on continuing on their own path, but at least the community running Map Libre, they can run with it, right? It's not like it just disappeared. [00:20:29] Brandon: Yeah, exactly. And that is something that I use for Protomaps quite extensively. Like it's the primary way of showing maps on the web and I've been trying to like work on some enhancements to it to have like better internationalization for if you are in like South Asia like not show languages correctly. So I think it is being taken in a new direction. And I think like sort of the combination of Protomaps and MapLibre, it addresses a lot of use cases, like I mentioned earlier with like these like hobby projects, indie projects that are almost certainly not interesting to someone like Mapbox or Google as a business. But I'm happy to support as a small business myself. Financially supporting open source work (GitHub sponsors, closed source, contracts) [00:21:12] Jeremy: In my previous interview with Tom, one of the main things he mentioned was that creating a mapping business is incredibly difficult, and he said he probably wouldn't do it again. So in your case, you're building Protomaps, which you've admitted is easy to self-host. So there's not a whole lot of incentive for people to pay you. How is that working out for you? How are you supporting yourself? [00:21:40] Brandon: There's a couple of strategies that I've tried and oftentimes failed at. Just to go down the list, so I do have GitHub sponsors so I do have a hosted version of Protomaps you can use if you don't want to bother copying a big file around. But the way I do the billing for that is through GitHub sponsors. If you wanted to use this thing I provide, then just be a sponsor. And that definitely pays for itself, like the cost of running it. And that's great. GitHub sponsors is so easy to set up. It just removes you having to deal with Stripe or something. 'cause a lot of people, their credit card information is already in GitHub. GitHub sponsors I think is awesome if you want to like cover costs for a project. But I think very few people are able to make that work. A thing that's like a salary job level. It's sort of like Twitch streaming, you know, there's a handful of people that are full-time streamers and then you look down the list on Twitch and it's like a lot of people that have like 10 viewers. But some of the other things I've tried, I actually started out, publishing the base map as a closed source thing, where I would sell sort of like a data package instead of being a SaaS, I'd be like, here's a one-time download, of the premium data and you can buy it. And quite a few people bought it I just priced it at like $500 for this thing. And I thought that was an interesting experiment. The main reason it's interesting is because the people that it attracts to you in terms of like, they're curious about your products, are all people willing to pay money. While if you start out everything being open source, then the people that are gonna be try to do it are only the people that want to get something for free. So what I discovered is actually like once you transition that thing from closed source to open source, a lot of the people that used to pay you money will still keep paying you money because like, it wasn't necessarily that that closed source thing was why they wanted to pay. They just valued that thought you've put into it your expertise, for example. So I think that is one thing, that I tried at the beginning was just start out, closed source proprietary, then make it open source. That's interesting to people. Like if you release something as open source, if you go the other way, like people are really mad if you start out with something open source and then later on you're like, oh, it's some other license. Then people are like that's so rotten. But I think doing it the other way, I think is quite valuable in terms of being able to find an audience. [00:24:29] Jeremy: And when you said it was closed source and paid to open source, do you still sell those map exports? [00:24:39] Brandon: I don't right now. It's something that I might do in the future, you know, like have small customizations of the data that are available, uh, for a fee. still like the core OpenStreetMap based map that's like a hundred gigs you can just download. And that'll always just be like a free download just because that's already out there. All the source code to build it is open source. So even if I said, oh, you have to pay for it, then someone else can just do it right? So there's no real reason like to make that like some sort of like paywall thing. But I think like overall if the project is gonna survive in the long term it's important that I'd ideally like to be able to like grow like a team like have a small group of people that can dedicate the time to growing the project in the long term. But I'm still like trying to figure that out right now. [00:25:34] Jeremy: And when you mentioned that when you went from closed to open and people were still paying you, you don't sell a product anymore. What were they paying for? [00:25:45] Brandon: So I have some contracts with companies basically, like if they need a feature or they need a customization in this way then I am very open to those. And I sort of set it up to make it clear from the beginning that this is not just a free thing on GitHub, this is something that you could pay for if you need help with it, if you need support, if you wanted it. I'm also a little cagey about the word support because I think like it sounds a little bit too wishy-washy. Pretty much like if you need access to the developers of an open source project, I think that's something that businesses are willing to pay for. And I think like making that clear to potential users is a challenge. But I think that is one way that you might be able to make like a living out of open source. [00:26:35] Jeremy: And I think you said you'd been working on it for about five years. Has that mostly been full time? [00:26:42] Brandon: It's been on and off. it's sort of my pandemic era project. But I've spent a lot of time, most of my time working on the open source project at this point. So I have done some things that were more just like I'm doing a customization or like a private deployment for some client. But that's been a minority of the time. Yeah. [00:27:03] Jeremy: It's still impressive to have an open source project that is easy to self-host and yet is still able to support you working on it full time. I think a lot of people might make the assumption that there's nothing to sell if something is, is easy to use. But this sort of sounds like a counterpoint to that. [00:27:25] Brandon: I think I'd like it to be. So when you come back to the point of like, it being easy to self-host. Well, so again, like I think about it as like a primitive of the web. Like for example, if you wanted to start a business today as like hosted CSS files, you know, like where you upload your CSS and then you get developers to pay you a monthly subscription for how many times they fetched a CSS file. Well, I think most developers would be like, that's stupid because it's just an open specification, you just upload a static file. And really my goal is to make Protomaps the same way where it's obvious that there's not really some sort of lock-in or some sort of secret sauce in the server that does this thing. How PMTiles works and building a primitive of the web [00:28:16] Brandon: If you look at video for example, like a lot of the tech for how Protomaps and PMTiles works is based on parts of the HTTP spec that were made for video. And 20 years ago, if you wanted to host a video on the web, you had to have like a real player license or flash. So you had to go license some server software from real media or from macromedia so you could stream video to a browser plugin. But now in HTML you can just embed a video file. And no one's like, oh well I need to go pay for my video serving license. I mean, there is such a thing, like YouTube doesn't really use that for DRM reasons, but people just have the assumption that video is like a primitive on the web. So if we're able to make maps sort of that same way like a primitive on the web then there isn't really some obvious business or licensing model behind how that works. Just because it's a thing and it helps a lot of people do their jobs and people are happy using it. So why bother? [00:29:26] Jeremy: You mentioned that it a tech that was used for streaming video. What tech specifically is it? [00:29:34] Brandon: So it is byte range serving. So when you open a video file on the web, So let's say it's like a 100 megabyte video. You don't have to download the entire video before it starts playing. It streams parts out of the file based on like what frames... I mean, it's based on the frames in the video. So it can start streaming immediately because it's organized in a way to where the first few frames are at the beginning. And what PMTiles really is, is it's just like a video but in space instead of time. So it's organized in a way where these zoomed out views are at the beginning and the most zoomed in views are at the end. So when you're like panning or zooming in the map all you're really doing is fetching byte ranges out of that file the same way as a video. But it's organized in, this tiled way on a space filling curve. IIt's a little bit complicated how it works internally and I think it's kind of cool but that's sort of an like an implementation detail. [00:30:35] Jeremy: And to the person deploying it, it just looks like a single file. [00:30:40] Brandon: Exactly in the same way like an mp3 audio file is or like a JSON file is. [00:30:47] Jeremy: So with a video, I can sort of see how as someone seeks through the video, they start at the beginning and then they go to the middle if they wanna see the middle. For a map, as somebody scrolls around the map, are you seeking all over the file or is the way it's structured have a little less chaos? [00:31:09] Brandon: It's structured. And that's kind of the main technical challenge behind building PMTiles is you have to be sort of clever so you're not spraying the reads everywhere. So it uses something called a hilbert curve, which is a mathematical concept of a space filling curve. Where it's one continuous curve that essentially lets you break 2D space into 1D space. So if you've seen some maps of IP space, it uses this crazy looking curve that hits all the points in one continuous line. And that's the same concept behind PMTiles is if you're looking at one part of the world, you're sort of guaranteed that all of those parts you're looking at are quite close to each other and the data you have to transfer is quite minimal, compared to if you just had it at random. [00:32:02] Jeremy: How big do the files get? If I have a PMTiles of the entire world, what kind of size am I looking at? [00:32:10] Brandon: Right now, the default one I distribute is 128 gigabytes, so it's quite sizable, although you can slice parts out of it remotely. So if you just wanted. if you just wanted California or just wanted LA or just wanted only a couple of zoom levels, like from zero to 10 instead of zero to 15, there is a command line tool that's also called PMTiles that lets you do that. Issues with CDNs and range queries [00:32:35] Jeremy: And when you're working with files of this size, I mean, let's say I am working with a CDN in front of my application. I'm not typically accustomed to hosting something that's that large and something that's where you're seeking all over the file. is that, ever an issue or is that something that's just taken care of by the browser and, and taken care of by, by the hosts? [00:32:58] Brandon: That is an issue actually, so a lot of CDNs don't deal with it correctly. And my recommendation is there is a kind of proxy server or like a serverless proxy thing that I wrote. That runs on like cloudflare workers or on Docker that lets you proxy those range requests into a normal URL and then that is like a hundred percent CDN compatible. So I would say like a lot of the big commercial installations of this thing, they use that because it makes more practical sense. It's also faster. But the idea is that this solution sort of scales up and scales down. If you wanted to host just your city in like a 10 megabyte file, well you can just put that into GitHub pages and you don't have to worry about it. If you want to have a global map for your website that serves a ton of traffic then you probably want a little bit more sophisticated of a solution. It still does not require you to run a Linux server, but it might require (you) to use like Lambda or Lambda in conjunction with like a CDN. [00:34:09] Jeremy: Yeah. And that sort of ties into what you were saying at the beginning where if you can host on something like CloudFlare Workers or Lambda, there's less time you have to spend keeping these things running. [00:34:26] Brandon: Yeah, exactly. and I think also the Lambda or CloudFlare workers solution is not perfect. It's not as perfect as S3 or as just static files, but in my experience, it still is better at building something that lasts on the time span of years than being like I have a server that is on this Ubuntu version and in four years there's all these like security patches that are not being applied. So it's still sort of serverless, although not totally vendor neutral like S3. Customizing the map [00:35:03] Jeremy: We've mostly been talking about how you host the map itself, but for someone who's not familiar with these kind of tools, how would they be customizing the map? [00:35:15] Brandon: For customizing the map there is front end style customization and there's also data customization. So for the front end if you wanted to change the water from the shade of blue to another shade of blue there is a TypeScript API where you can customize it almost like a text editor color scheme. So if you're able to name a bunch of colors, well you can customize the map in that way you can change the fonts. And that's all done using MapLibre GL using a TypeScript API on top of that for customizing the data. So all the pipeline to generate this data from OpenStreetMap is open source. There is a Java program using a library called PlanetTiler which is awesome, which is this super fast multi-core way of building map tiles. And right now there isn't really great hooks to customize what data goes into that. But that's something that I do wanna work on. And finally, because the data comes from OpenStreetMap if you notice data that's missing or you wanted to correct data in OSM then you can go into osm.org. You can get involved in contributing the data to OSM and the Protomaps build is daily. So if you make a change, then within 24 hours you should see the new base map. Have that change. And of course for OSM your improvements would go into every OSM based project that is ingesting that data. So it's not a protomap specific thing. It's like this big shared data source, almost like Wikipedia. OpenStreetMap is a dataset and not a map [00:37:01] Jeremy: I think you were involved with OpenStreetMap to some extent. Can you speak a little bit to that for people who aren't familiar, what OpenStreetMap is? [00:37:11] Brandon: Right. So I've been using OSM as sort of like a tools developer for over a decade now. And one of the number one questions I get from developers about what is Protomaps is why wouldn't I just use OpenStreetMap? What's the distinction between Protomaps and OpenStreetMap? And it's sort of like this funny thing because even though OSM has map in the name it's not really a map in that you can't... In that it's mostly a data set and not a map. It does have a map that you can see that you can pan around to when you go to the website but the way that thing they show you on the website is built is not really that easily reproducible. It involves a lot of c++ software you have to run. But OpenStreetMap itself, the heart of it is almost like a big XML file that has all the data in the map and global. And it has tagged features for example. So you can go in and edit that. It has a web front end to change the data. It does not directly translate into making a map actually. Protomaps decides what shows at each zoom level [00:38:24] Brandon: So a lot of the pipeline, that Java program I mentioned for building this basemap for protomaps is doing things like you have to choose what data you show when you zoom out. You can't show all the data. For example when you're zoomed out and you're looking at all of a state like Colorado you don't see all the Chipotle when you're zoomed all the way out. That'd be weird, right? So you have to make some sort of decision in logic that says this data only shows up at this zoom level. And that's really what is the challenge in optimizing the size of that for the Protomaps map project. [00:39:03] Jeremy: Oh, so those decisions of what to show at different Zoom levels those are decisions made by you when you're creating the PMTiles file with Protomaps. [00:39:14] Brandon: Exactly. It's part of the base maps build pipeline. and those are honestly very subjective decisions. Who really decides when you're zoomed out should this hospital show up or should this museum show up nowadays in Google, I think it shows you ads. Like if someone pays for their car repair shop to show up when you're zoomed out like that that gets surfaced. But because there is no advertising auction in Protomaps that doesn't happen obviously. So we have to sort of make some reasonable choice. A lot of that right now in Protomaps actually comes from another open source project called Mapzen. So Mapzen was a company that went outta business a couple years ago. They did a lot of this work in designing which data shows up at which Zoom level and open sourced it. And then when they shut down, they transferred that code into the Linux Foundation. So it's this totally open source project, that like, again, sort of like Mapbox gl has this awesome legacy in that this company funded it for years for smart people to work on it and now it's just like a free thing you can use. So the logic in Protomaps is really based on mapzen. [00:40:33] Jeremy: And so the visualization of all this... I think I understand what you mean when people say oh, why not use OpenStreetMaps because it's not really clear it's hard to tell is this the tool that's visualizing the data? Is it the data itself? So in the case of using Protomaps, it sounds like Protomaps itself has all of the data from OpenStreetMap and then it has made all the decisions for you in terms of what to show at different Zoom levels and what things to have on the map at all. And then finally, you have to have a separate, UI layer and in this case, it sounds like the one that you recommend is the Map Libre library. [00:41:18] Brandon: Yeah, that's exactly right. For Protomaps, it has a portion or a subset of OSM data. It doesn't have all of it just because there's too much, like there's data in there. people have mapped out different bushes and I don't include that in Protomaps if you wanted to go in and edit like the Java code to add that you can. But really what Protomaps is positioned at is sort of a solution for developers that want to use OSM data to make a map on their app or their website. because OpenStreetMap itself is mostly a data set, it does not really go all the way to having an end-to-end solution. Financials and the idea of a project being complete [00:41:59] Jeremy: So I think it's great that somebody who wants to make a map, they have these tools available, whether it's from what was originally built by Mapbox, what's built by Open StreetMap now, the work you're doing with Protomaps. But I wonder one of the things that I talked about with Tom was he was saying he was trying to build this mapping business and based on the financials of what was coming in he was stressed, right? He was struggling a bit. And I wonder for you, you've been working on this open source project for five years. Do you have similar stressors or do you feel like I could keep going how things are now and I feel comfortable? [00:42:46] Brandon: So I wouldn't say I'm a hundred percent in one bucket or the other. I'm still seeing it play out. One thing, that I really respect in a lot of open source projects, which I'm not saying I'm gonna do for Protomaps is the idea that a project is like finished. I think that is amazing. If a software project can just be done it's sort of like a painting or a novel once you write, finish the last page, have it seen by the editor. I send it off to the press is you're done with a book. And I think one of the pains of software is so few of us can actually do that. And I don't know obviously people will say oh the map is never finished. That's more true of OSM, but I think like for Protomaps. One thing I'm thinking about is how to limit the scope to something that's quite narrow to where we could be feature complete on the core things in the near term timeframe. That means that it does not address a lot of things that people want. Like search, like if you go to Google Maps and you search for a restaurant, you will get some hits. that's like a geocoding issue. And I've already decided that's totally outta scope for Protomaps. So, in terms of trying to think about the future of this, I'm mostly looking for ways to cut scope if possible. There are some things like better tooling around being able to work with PMTiles that are on the roadmap. but for me, I am still enjoying working on the project. It's definitely growing. So I can see on NPM downloads I can see the growth curve of people using it and that's really cool. So I like hearing about when people are using it for cool projects. So it seems to still be going okay for now. [00:44:44] Jeremy: Yeah, that's an interesting perspective about how you were talking about projects being done. Because I think when people look at GitHub projects and they go like, oh, the last commit was X months ago. They go oh well this is dead right? But maybe that's the wrong framing. Maybe you can get a project to a point where it's like, oh, it's because it doesn't need to be updated. [00:45:07] Brandon: Exactly, yeah. Like I used to do a lot of c++ programming and the best part is when you see some LAPACK matrix math library from like 1995 that still works perfectly in c++ and you're like, this is awesome. This is the one I have to use. But if you're like trying to use some like React component library and it hasn't been updated in like a year, you're like, oh, that's a problem. So again, I think there's some middle ground between those that I'm trying to find. I do like for Protomaps, it's quite dependency light in terms of the number of hard dependencies I have in software. but I do still feel like there is a lot of work to be done in terms of project scope that needs to have stuff added. You mostly only hear about problems instead of people's wins [00:45:54] Jeremy: Having run it for this long. Do you have any thoughts on running an open source project in general? On dealing with issues or managing what to work on things like that? [00:46:07] Brandon: Yeah. So I have a lot. I think one thing people point out a lot is that especially because I don't have a direct relationship with a lot of the people using it a lot of times I don't even know that they're using it. Someone sent me a message saying hey, have you seen flickr.com, like the photo site? And I'm like, no. And I went to flickr.com/map and it has Protomaps for it. And I'm like, I had no idea. But that's cool, if they're able to use Protomaps for this giant photo sharing site that's awesome. But that also means I don't really hear about when people use it successfully because you just don't know, I guess they, NPM installed it and it works perfectly and you never hear about it. You only hear about people's negative experiences. You only hear about people that come and open GitHub issues saying this is totally broken, and why doesn't this thing exist? And I'm like, well, it's because there's an infinite amount of things that I want to do, but I have a finite amount of time and I just haven't gone into that yet. And that's honestly a lot of the things and people are like when is this thing gonna be done? So that's, that's honestly part of why I don't have a public roadmap because I want to avoid that sort of bickering about it. I would say that's one of my biggest frustrations with running an open source project is how it's self-selected to only hear the negative experiences with it. Be careful what PRs you accept [00:47:32] Brandon: 'cause you don't hear about those times where it works. I'd say another thing is it's changed my perspective on contributing to open source because I think when I was younger or before I had become a maintainer I would open a pull request on a project unprompted that has a hundred lines and I'd be like, Hey, just merge this thing. But I didn't realize when I was younger well if I just merge it and I disappear, then the maintainer is stuck with what I did forever. You know if I add some feature then that person that maintains the project has to do that indefinitely. And I think that's very asymmetrical and it's changed my perspective a lot on accepting open source contributions. I wanna have it be open to anyone to contribute. But there is some amount of back and forth where it's almost like the default answer for should I accept a PR is no by default because you're the one maintaining it. And do you understand the shape of that solution completely to where you're going to support it for years because the person that's contributing it is not bound to those same obligations that you are. And I think that's also one of the things where I have a lot of trepidation around open source is I used to think of it as a lot more bazaar-like in terms of anyone can just throw their thing in. But then that creates a lot of problems for the people who are expected out of social obligation to continue this thing indefinitely. [00:49:23] Jeremy: Yeah, I can totally see why that causes burnout with a lot of open source maintainers, because you probably to some extent maybe even feel some guilt right? You're like, well, somebody took the time to make this. But then like you said you have to spend a lot of time trying to figure out is this something I wanna maintain long term? And one wrong move and it's like, well, it's in here now. [00:49:53] Brandon: Exactly. To me, I think that is a very common failure mode for open source projects is they're too liberal in the things they accept. And that's a lot of why I was talking about how that choice of what features show up on the map was inherited from the MapZen projects. If I didn't have that then somebody could come in and say hey, you know, I want to show power lines on the map. And they open a PR for power lines and now everybody who's using Protomaps when they're like zoomed out they see power lines are like I didn't want that. So I think that's part of why a lot of open source projects eventually evolve into a plugin system is because there is this demand as the project grows for more and more features. But there is a limit in the maintainers. It's like the demand for features is exponential while the maintainer amount of time and effort is linear. Plugin systems might reduce need for PRs [00:50:56] Brandon: So maybe the solution to smash that exponential down to quadratic maybe is to add a plugin system. But I think that is one of the biggest tensions that only became obvious to me after working on this for a couple of years. [00:51:14] Jeremy: Is that something you're considering doing now? [00:51:18] Brandon: Is the plugin system? Yeah. I think for the data customization, I eventually wanted to have some sort of programmatic API to where you could declare a config file that says I want ski routes. It totally makes sense. The power lines example is maybe a little bit obscure but for example like a skiing app and you want to be able to show ski slopes when you're zoomed out well you're not gonna be able to get that from Mapbox or from Google because they have a one size fits all map that's not specialized to skiing or to golfing or to outdoors. But if you like, in theory, you could do this with Protomaps if you changed the Java code to show data at different zoom levels. And that is to me what makes the most sense for a plugin system and also makes the most product sense because it enables a lot of things you cannot do with the one size fits all map. [00:52:20] Jeremy: It might also increase the complexity of the implementation though, right? [00:52:25] Brandon: Yeah, exactly. So that's like. That's really where a lot of the terrifying thoughts come in, which is like once you create this like config file surface area, well what does that look like? Is that JSON? Is that TOML, is that some weird like everything eventually evolves into some scripting language right? Where you have logic inside of your templates and I honestly do not really know what that looks like right now. That feels like something in the medium term roadmap. [00:52:58] Jeremy: Yeah and then in terms of bug reports or issues, now it's not just your code it's this exponential combination of whatever people put into these config files. [00:53:09] Brandon: Exactly. Yeah. so again, like I really respect the projects that have done this well or that have done plugins well. I'm trying to think of some, I think obsidian has plugins, for example. And that seems to be one of the few solutions to try and satisfy the infinite desire for features with the limited amount of maintainer time. Time split between code vs triage vs talking to users [00:53:36] Jeremy: How would you say your time is split between working on the code versus issue and PR triage? [00:53:43] Brandon: Oh, it varies really. I think working on the code is like a minority of it. I think something that I actually enjoy is talking to people, talking to users, getting feedback on it. I go to quite a few conferences to talk to developers or people that are interested and figure out how to refine the message, how to make it clearer to people, like what this is for. And I would say maybe a plurality of my time is spent dealing with non-technical things that are neither code or GitHub issues. One thing I've been trying to do recently is talk to people that are not really in the mapping space. For example, people that work for newspapers like a lot of them are front end developers and if you ask them to run a Linux server they're like I have no idea. But that really is like one of the best target audiences for Protomaps. So I'd say a lot of the reality of running an open source project is a lot like a business is it has all the same challenges as a business in terms of you have to figure out what is the thing you're offering. You have to deal with people using it. You have to deal with feedback, you have to deal with managing emails and stuff. I don't think the payoff is anywhere near running a business or a startup that's backed by VC money is but it's definitely not the case that if you just want to code, you should start an open source project because I think a lot of the work for an opensource project has nothing to do with just writing the code. It is in my opinion as someone having done a VC backed business before, it is a lot more similar to running, a tech company than just putting some code on GitHub. Running a startup vs open source project [00:55:43] Jeremy: Well, since you've done both at a high level what did you like about running the company versus maintaining the open source project? [00:55:52] Brandon: So I have done some venture capital accelerator programs before and I think there is an element of hype and energy that you get from that that is self perpetuating. Your co-founder is gungho on like, yeah, we're gonna do this thing. And your investors are like, you guys are geniuses. You guys are gonna make a killing doing this thing. And the way it's framed is sort of obvious to everyone that it's like there's a much more traditional set of motivations behind that, that people understand while it's definitely not the case for running an open source project. Sometimes you just wake up and you're like what the hell is this thing for, it is this thing you spend a lot of time on. You don't even know who's using it. The people that use it and make a bunch of money off of it they know nothing about it. And you know, it's just like cool. And then you only hear from people that are complaining about it. And I think like that's honestly discouraging compared to the more clear energy and clearer motivation and vision behind how most people think about a company. But what I like about the open source project is just the lack of those constraints you know? Where you have a mandate that you need to have this many customers that are paying by this amount of time. There's that sort of pressure on delivering a business result instead of just making something that you're proud of that's simple to use and has like an elegant design. I think that's really a difference in motivation as well. Having control [00:57:50] Jeremy: Do you feel like you have more control? Like you mentioned how you've decided I'm not gonna make a public roadmap. I'm the sole developer. I get to decide what goes in. What doesn't. Do you feel like you have more control in your current position than you did running the startup? [00:58:10] Brandon: Definitely for sure. Like that agency is what I value the most. It is possible to go too far. Like, so I'm very wary of the BDFL title, which I think is how a lot of open source projects succeed. But I think there is some element of for a project to succeed there has to be somebody that makes those decisions. Sometimes those decisions will be wrong and then hopefully they can be rectified. But I think going back to what I was talking about with scope, I think the overall vision and the scope of the project is something that I am very opinionated about in that it should do these things. It shouldn't do these things. It should be easy to use for this audience. Is it gonna be appealing to this other audience? I don't know. And I think that is really one of the most important parts of that leadership role, is having the power to decide we're doing this, we're not doing this. I would hope other developers would be able to get on board if they're able to make good use of the project, if they use it for their company, if they use it for their business, if they just think the project is cool. So there are other contributors at this point and I want to get more involved. But I think being able to make those decisions to what I believe is going to be the best project is something that is very special about open source, that isn't necessarily true about running like a SaaS business. [00:59:50] Jeremy: I think that's a good spot to end it on, so if people want to learn more about Protomaps or they wanna see what you're up to, where should they head? [01:00:00] Brandon: So you can go to Protomaps.com, GitHub, or you can find me or Protomaps on bluesky or Mastodon. [01:00:09] Jeremy: All right, Brandon, thank you so much for chatting today. [01:00:12] Brandon: Great. Thank you very much.
UML, オブジェクト指向, XML, DRY原則, ハンガリアン記法など、この10年20年で廃れたアイディアやツールたちについて話しました。DRY原則の初出は1999年出版の「達人プログラマ」https://amzn.to/4l8WHnX のようです。間違ったコードは間違って見えるようにする (ハンガリアン記法, きれいなパン工場)富豪的プログラミング http://www.pitecan.com/fugo.html 原典は、1997年コンピュータサイエンス誌『bit』で増井俊之氏が書かれた記事のようです。感想をぜひハッシュタグ #todayILearnedFM #tilfm でつぶやいてください!Your co-hosts:Tomoaki Imai, Noxx CTO https://twitter.com/tomoaki_imaiRyoichi Kato, Software Engineer https://twitter.com/ryo1kato
Looking for the best Pinterest SEO practices to save you a tonne of time and money while growing your business? Watch me, on video, break down in 25 minutes, how Pinterest Business works as a visual search engine at Cre8tive Con in Intercontinental Hotel, Downtown Chicago.Watch this video on YouTube Unlisted: https://www.youtube.com/watch?v=Pu7TmBzKQJEGoes Public on April 1This video will walk you though:◉ Pinterest SEO: Statistics to Know◉ How to Use Pinterest Boards and Pins◉ Why Pinterest SEO is Important for BusinessesHere are the timestamps for the discussed topics in the video:00:00 - 00:14: Introduction and initial interaction with the audience.00:14 - 00:31: Uses of Pinterest by the audience.00:31- 00:57: General thoughts about Pinterest and its significance.00:57 - 01:12: The concept of "interest" derived from "Pinterest".01:12 - 01:50: Pinterest's taste graph, user statistics, and unique advertising features.01:50 - 02:29: Zip code marketing and its advantages on Pinterest.02:29 - 03:24: Comparison of content lifespan between Pinterest and other platforms.03:24 - 04:03: The importance of planning ahead on Pinterest.04:03 - 05:09: Pinterest as a connector between different platforms and scale opportunities.05:09 - 06:59: How Pinterest connects social media platforms and builds audience engagement.06:59 - 08:20: Strategic organization of boards and pins on Pinterest.08:20 - 09:59: Unique features for business and affiliate marketing on Pinterest.09:59 - 11:10: Distinction between personal and business use of Pinterest.11:10 - 12:32: Personal testimony of Pinterest's effectiveness.12:32 - 13:59: Detailed explanation on types of data connections.13:59 - 15:38: Strategic use of other platforms like Instagram and LinkedIn with Pinterest.15:38 - 17:02: Minimizing wasted efforts and money with ads on Pinterest.17:02 - 19:32: Importance of embedding content on websites to enhance engagement.19:32 - 21:49: Explaining technical details of XML files and RSS feeds.21:49 - 22:56: Guest interaction and testimonial on using Pinterest effectively.22:56 - 23:25: Explanation of Pinterest's algorithm “Pixie”.23:25 - 24:08: How to leverage image content for better searchability.24:08 - 24:22: Further clarification on operational use.24:22 - 24:49: Audience appreciation and closing remarks.How to stay connected with me
An airhacks.fm conversation with Volker Simonis (@volker_simonis) about: early computing experiences with Schneider CPC (Amstrad in UK) with Z80 CPU, CP/M operating system as an add-on that provided a real file system, programming in Basic and Turbo Pascal on early computers, discussion about gaming versus programming interests, using a 9-pin needle printer for school work, programming on pocket computers with BASIC in school, memories of Digital Research's CP/M and DR-DOS competing with MS-DOS, HiMEM memory management in early operating systems, programming in Logo language with turtle graphics and fractals, fascination with Lindenmayer systems (L-systems) for simulating biological growth patterns, interest in biology and carnivorous plants, transition to PCs with floppy disk drives, using SGI Iris workstations at university with IRIX operating system, early experiences with Linux installed from floppy disks, challenges of configuring X Window System, programming graphics on interlaced monitors, early work with HP using Tickle/Tk and python around 1993, first experiences with Java around version 0.8/0.9, attraction to Java's platform-independent networking and graphics capabilities, using Blackdown Java for Linux created by Johan Vos, freelance work creating Java applets for accessing databases of technical standards, PhD work creating software for analyzing parallel text corpora in multiple languages, developing internationalization and XML capabilities in Java Swing applications, career at Sun Microsystems porting MaxDB to Solaris, transition to SAP to work on JVM development, Adabas and MaxDB, reflections on ABAP programming language at SAP and its database-centric nature Volker Simonis on twitter: @volker_simonis
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Python Bot Delivered Through DLL Side-Loading A "normal", but vulnerable to DLL side-loading PDF reader may be used to launch additional exploit code https://isc.sans.edu/diary/Python%20Bot%20Delivered%20Through%20DLL%20Side-Loading/31778 Tomcat RCE Correction To exploit the Tomcat RCE I mentioned yesterday, two non-default configuration options must be selected by the victim. https://x.com/dkx02668274/status/1901893656316969308 SAML Roulette: The Hacker Always Wins This Portswigger blog explains in detail how to exploit the ruby-saml vulnerablity against GitLab. https://portswigger.net/research/saml-roulette-the-hacker-always-wins Windows Shortcut Zero Day Exploit Attackers are currently taking advantage of an unpatched vulnerability in how Windows displays Shortcut (.lnk file) details. Trendmicro explains how the attack works and provides PoC code. Microsoft is not planning to fix this issue https://www.trendmicro.com/en_us/research/25/c/windows-shortcut-zero-day-exploit.html
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Static Analysis of GUID Encoded Shellcode Didier explains how to decode shell code embeded as GUIDs in malware, and how to feed the result to his tool 1768.py which will extract Cobal Strike configuration information from the code. https://isc.sans.edu/diary/Static%20Analysis%20of%20GUID%20Encoded%20Shellcode/31774 SAMLStorm: Critical Authentication Bypass in xml-crypto and Node.js libraries xml-crypto, a library use in Node.js applications to decode XML and support SAML, has found to parse comments incorrectly leading to several SAML vulnerabilities. https://workos.com/blog/samlstorm One PUT Request to Own Tomcat: CVE-2025-24813 RCE is in the Wild A just made public deserialization vulnerablity in Tomcat is already being exploited. Contributing to the rapid exploit release is the similarity of this vulnerability to other Java deserializtion vulnerabilities. https://lab.wallarm.com/one-put-request-to-own-tomcat-cve-2025-24813-rce-is-in-the-wild/ CVE-2025-24813 CSS Abuse for Evasion and Tracking Attackers are using cascading stylesheets to evade detection and enable more stealthy tracking of users https://blog.talosintelligence.com/css-abuse-for-evasion-and-tracking/
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Mirai Bot Now Incorporating Malformed DrayTek Vigor Router Exploits One of the many versions of the Mirai botnet added some new exploit strings attempting to take advantage of an old DrayTek Vigor Router vulnerability, but they got the URL wrong. https://isc.sans.edu/diary/Mirai%20Bot%20now%20incroporating%20%28malformed%3F%29%20DrayTek%20Vigor%20Router%20Exploits/31770 Compromised GitHub Action The popular GitHub action tj-actions/changed-files was compromised and leaks credentials via the action logs https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised ruby-saml authentication bypass A confusion in how to parse SAML messages between two XML parsers used by Ruby leads to an authentication bypass in saml-ruby. https://github.blog/security/sign-in-as-anyone-bypassing-saml-sso-authentication-with-parser-differentials/ GitHub Fake Security Alerts Fake GitHub security alerts are used to trick package maintainers into adding OAUTH privileges to malicious apps. https://www.bleepingcomputer.com/news/security/fake-security-alert-issues-on-github-use-oauth-app-to-hijack-accounts/
Discussion this week starts with the ESP32 "backdoor" drama that circled the media, with some XML-based vulnerabilities in the mix. Finally, we cap off with a post on reviving modprobe_path for Linux exploitation, and some discussion around an attack chain against China that was attributed to the NSA.Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/277.html[00:00:00] Introduction[00:00:25] The ESP32 "backdoor" that wasn't[00:14:26] Speedrunners are vulnerability researchers[00:27:58] Sign in as anyone: Bypassing SAML SSO authentication with parser differentials[00:38:47] Impossible XXE in PHP[00:52:41] Reviving the modprobe_path Technique: Overcoming search_binary_handler() Patch[01:04:15] Trigon: developing a deterministic kernel exploit for iOS[01:06:43] An inside look at NSA (Equation Group) TTPs from China's lensePodcast episodes are available on the usual podcast platforms: -- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063 -- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt -- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz -- Other audio platforms can be found at https://anchor.fm/dayzerosecYou can also join our discord: https://discord.gg/daTxTK9
How AI is transforming content creation by removing technical barriers and allowing creators to focus on ideas. Why this matters: AI has changed the way people approach podcasting, video, and written content. Creators are shifting from figuring out how to create content to focusing on what to create. Read the blog post that inspired this episode, from Barry Kantz on the Blubrry team: AI Has Changed My Brain This is an exciting time for podcasters and content creators... How AI Enhances Creative Processes in Podcasting 1. AI and the Shift from “How-To” to “What-To” What was the "How-To"? In early podcasting, creators had to: Manually build RSS feeds. Learn complex audio/video editing. Invest in expensive software and equipment. Overcome a steep learning curve. The problem: Technical challenges took up too much time, limiting creativity. What is the "What-To"? Now, AI helps with: Brainstorming topics → AI can generate ideas based on trends and user preferences. Writing assistance → AI drafts scripts, outlines, and even refines writing style. Image & Video creation → AI generates visuals and edits videos quickly. Podcast automation → AI tools (like Blubrry's services) streamline publishing and promotion. The result: Creators can focus on their ideas, message, and audience engagement instead of technical tasks. AI allows for more experimentation and creativity without being held back by logistics. 2. The Evolution of Podcasting and Content Creation Podcasting Then (2004-2005): Mostly tech-savvy creators due to technical barriers. Recording, editing, and distributing a podcast required expertise. Small niche audience, mostly early adopters. Podcasting Now: More accessible than ever → AI-driven services handle the majority of the work (record, upload, and distribute). Lower barrier to entry → No need for coding, XML feeds, or advanced editing skills. More diverse voices → AI has allowed anyone with ideas to start podcasting, regardless of technical skill. Key Takeaway for Listeners: AI has made podcasting easier, so there's no excuse not to start! If you have an idea, AI can help you bring it to life. 3. The Role of AI in Video Creation How AI is Improving Video Creation: AI automates editing, transcription, and animation. Platforms now generate videos from text (e.g., AI avatars reading scripts). AI enhances video quality, removes background noise, and adjusts lighting automatically. Blubrry's Role in Simplifying Video Content: Pod2Vid → Transforms podcasts into YouTube videos (no extra effort needed). AI tools help convert videos into podcasts → Vid2Pod (capturing both audiences). Future Trends: AI-generated video content will continue to improve. More seamless integration of podcasts and video across platforms. Eventually, AI will make video content creation as easy as podcasting. What This Means for Podcasters: If you're not using video yet, AI makes it easier than ever. Repurpose your podcast into video content to reach a wider audience. 4. The Impact on Businesses and Creators How businesses and entrepreneurs can leverage AI to create content that connects with their audience: AI enables businesses to: Quickly create valuable content → blogs, videos, and podcasts with minimal effort. Generate topic ideas based on customer interests and industry trends. Repurpose content → Turn one podcast episode into multiple pieces of content (blog posts, video clips, social media posts). Enhance engagement → AI helps personalize content for different audience segments. What This Means for Business Owners & Marketers: Focus on storytelling instead of production logistics. Use AI-powered content to build trust with customers. Consistently deliver high-quality content without needing a big team. Example:
This show has been flagged as Clean by the host. Learning to use the terminal is an important step in becoming a true power user of Linux, but it's easy (and normal) to make mistakes along the way. Here are the top 5 mistakes new terminal users make, and what you can learn from them. 1. Current working directory When you first open a terminal, your current working directory is your home folder. You have access to all those directories you see in your home directory every time you open a file manager (Desktop, Documents, Downloads, Music, Pictures, and Videos). You can verify your location with the pwd command: $ pwd /home/seth You can list the files and folders within your current directory with the ls or dir or tree commands: $ ls Desktop Documents Downloads Music Pictures Videos But you don't usually stay in one place while using the terminal. You frequently move from folder to folder so you can open or modify or edit files. It's easy to get lost, forgetting what directory you're in and what files are around you. Lesson learned: When working in the terminal, it's important to regularly verify your current working directory with pwd so you don't accidentally issue a command you intended to run in a different location. 2. Use interactive options when using wildcards Wildcards are great shorthand to make command entry faster, and to perform bulk actions on lots of files. However, they can be dangerous when you get them wrong. It's easy to process hundreds of the wrong files by using a wildcard in the wrong directory, or by using a wildcard that's too broad. For example, suppose you want to run a sed command on all HTML files in a directory, so you run this: $ sed --in-place 's/day/night/g' *ml Job done, until you find out that you accidentally ran that command on all your XML files, too. Lesson learned: Run a safe test command on the wildcard you think you want to target before making a change. Some commands have a literal --dry-run option. Others have an --interactive option that forces the command to prompt you to confirm that you want to carry out the action. Sometimes the logic is reversed: a command refuses to make a major change unless you use a command (for example, sed doesn't write changes to a file without the --in-place option or redirection). When in doubt, improvise. You can always “expand” a wildcard using the echo command: $ echo ./*ml ./four.html ./one.xml ./three.html ./two.xml $ echo ./*tml ./four.html ./three.html 3. File paths Many new terminal users don't understand where files are located within the file system. It's not a common mistake to make on the desktop because there are visual reminders there. You wouldn't try to double-click on a document to open it if there was no icon to double-click. It's easy to assume that the terminal application contains all your files all at once, but the terminal is, by design, limited in scope. Were the terminal to have access to every file everywhere on your system all at once, you'd end up accidentally renaming and moving and copying a lot more files than intended. Specificity is a super power, and it's defined by the file path. A file path describes where a file exists in a file system. A full (or “absolute”) file path always starts from the single folder at the start of your operating system, indicated by just a /, and then lists each folder within that folder until it traces the path to a specific file. For example, I have a file called IMG_0001.JPG in my Pictures directory. You probably have a mental image of where that file is and how you'd get there on the desktop. But for the terminal to understand how to find it, the location must be expressed as /home/seth/Pictures/IMG_0001.JPG. An absolute file path is definitive. The terminal always understands an absolute file path, no matter what your current working directory is. The absolute path to a file can be unwieldy, though. Once you understand absolute paths, you can abbreviate any path to a relative file path. A relative file path is based on your current location in the terminal. As long as you're in the Pictures folder, the full path /home/seth/Pictures/IMG_0001.JPG can be shortened to just IMG_0001.JPG, or ./IMG_0001.JPG for added clarity (the . indicates no movement from your current location, and the / is a directory separator as usual). But suppose your current working directory was your home directory. Your Pictures folder is located in your home directory, so to get to IMG_0001.JPG you have to enter Pictures first. The relative path in that case is ./Pictures/IMG_0001.JPG or just Pictures/IMG_0001.JPG. Lesson learned: An absolute file path always starts from the start of a file system. A relative file path changes based on your location. The terminal understands both. For new users, the absolute file path is the most explicit and exact way to reference a file, so practice using them until you're comfortable with the concept of file paths. 4. Executable permissions By default, most files aren't executable. You can't run them like an application, because most files are meant to be opened in an application. That's not true for shell scripts, though. Shell scripts are text files containing a list of commands, and they're meant to be run like an application. They're a powerful way to string existing commands together to form a new custom command. However, because a shell script starts out as a regular text file, it's not seen by your terminal as an executable entity. To execute a file as an application, you can grant it executable permission with the chmod command: $ chmod +x ./example.sh Alternatively, you can run the file in a sub-shell: $ bash ./example.sh Notice that in these examples, I use the ./ notation as if the example.sh shell script exists in my current directory. 5. Typing errors It sometimes feels like the more you type, the more you're getting done. In a terminal, though, typing too much is one of the best ways to introduce mistakes. When you try to type a long and complex command, you're liable to spell something wrong or use the wrong option. When you try to type a filename or a file path, you might forget to escape special characters (like spaces). The errors aren't usually catastrophic, but they're frustrating and time consuming. Lesson learned: There are several ways to ensure you're entering the correct commands into your terminal: Copy and paste: If you're using a command you found on a trusted website, copy it in your browser and then paste it into your terminal using Ctrl+Shift+V or right-click and select Paste. TAB: You can type part of a command or file path, and then press the TAB key on your keyboard for either auto-completion or for suggested completions. Use it even when you don't think you need it. It'll save you errors every single time, even when it appears to not work (hint: it's not working because you're trying to auto-complete something that's not where you think it is). Drag-and-drop: It's the 21st century! You can drag a file or folder from anywhere on your computer, and drop it into your terminal. It gets replaced by its absolute path automatically. Practice makes perfect To get good in the terminal, you have to use it as often as you can. You don't have to use it for “serious” work at first, and you arguably shouldn't, but you can and should do simple exercises in the terminal. Understand file paths, get used to wildcards, learn shortcuts, use the TAB key. The biggest mistake you can make when learning the terminal is to not use the terminal, so open it up every day, do your exercise, and you'll be an expert in the terminal in no time. Provide feedback on this episode.
In today's episode, Casey will discuss 5 primary points to consider for your own SEO strategy.1. Keyword Research & StrategyWhat It Is:Keyword research involves finding the right words and phrases that potential customers are using to search for pest control services. This includes short-tail (broad) and long-tail (specific) keywords.How It Applies to a Pest Control Company:Identifying local keywords like "pest control near me," "exterminator in [city]," or "termite treatment in [location]"Researching service-specific terms like "bed bug removal Cincinnati" or "rodent exclusion Ann Arbor"Using keyword variations like "affordable pest control," "same-day exterminator," etc.Leveraging tools like Google Keyword Planner or SEMRush to find low-competition, high-intent keywords.What It Is:On-page SEO refers to optimizing website elements like page titles, meta descriptions, content, URLs, and images for search engines.How It Applies to a Pest Control Company:Creating optimized title tags (e.g., "Best Pest Control in Phoenix - Victory Pest Defense")Writing compelling meta descriptions that improve click-through rates (e.g., "Fast, reliable pest control services in Chandler, AZ. Call now for a free inspection!")Using header tags (H1, H2, H3) effectively with location-specific keywordsOptimizing images by adding descriptive alt text like "Bed bug extermination in Lexington, KY"Creating SEO-friendly URLs, such as:✅ www.example.com/ant-control-cincinnati❌ www.example.com/services1234What It Is:Local SEO focuses on optimizing your online presence for location-based searches. This includes Google Business Profile (GBP), local citations, and reviews.How It Applies to a Pest Control Company:Claiming and fully optimizing a Google Business Profile with correct business name, phone number, hours, and service areasAdding high-quality photos of technicians, vehicles, and completed jobsEncouraging positive customer reviews with follow-ups (e.g., “Thanks for using our service! We'd love to hear your feedback on Google.”)Ensuring NAP (Name, Address, Phone Number) consistency across all listings (Yelp, Angi, BBB, etc.)Creating location-based service pages (e.g., "Pest Control in Killeen, TX" or "Rodent Removal in Northern Kentucky")What It Is:Technical SEO involves optimizing the backend of the website to improve speed, mobile-friendliness, security, and crawlability.How It Applies to a Pest Control Company:Ensuring a fast-loading website (customers expect quick responses when dealing with pests!)Using mobile-responsive design (since most people search for pest control services from their phones)Implementing SSL security (HTTPS) for customer trust and SEO rankingCreating an XML sitemap and submitting it to Google for better indexingFixing broken links, duplicate content, and redirect errors that could hurt rankingsWhat It Is:Content marketing focuses on creating valuable content that educates potential customers and helps with ranking. Link building involves getting other reputable websites to link back to your site.Final ThoughtsBy applying these 5 SEO pillars, a local pest control company can rank higher in Google searches, attract more leads, and grow their customer base. A well-executed local SEO strategy combined with strong content marketing and technical SEO can significantly boost visibility and revenue.Please review us at Rhino Pest Control Marketing and interact with us to let us know how we can improve in 2025.Casey Lewiscasey@rhinopros.com(925) 464-8383Follow and subscribe at the following links:https://www.youtube.com/@RhinoPestControlMarketinghttps://www.facebook.com/rhinopestcontrolmarketingLeave us a review on Google: https://g.page/r/CT9-E84ypVI0EBM/review
Tiff and Dana share common pitfalls that scale back your practice's production — and what to do to address them. Included solutions are Dental A-Team's scorecard and a fixed cost spreadsheet, which you can reach out to the DAT for help on: hello@thedentalateam.com. Episode resources: Subscribe to The Dental A-Team podcast Schedule a Practice Assessment Leave us a review Transcript: The Dental A Team (00:01.967) Hello, Dental A Team podcast listeners. I am so excited to be here with you today. This is Tiffanie. I never introduced myself, which is weird. Hopefully there's like some sort of intro that goes to that. just thought of that. Anyhow, here we are. Another day, another podcast. And first and foremost, I want to thank all of you guys who listen. I know we get a lot of practice assessment. schedules from people who listen to our podcasts. That's how you guys are finding us. And it just means a lot to us that you're here, that you're with us, that you're supporting us. We want to support you and we are doing forever. As far as I can tell, we're doing new free practice assessments for practices all over, all over the country, all over the world. Sometimes we get Canadian practices and it's super cool. I know we've worked with practices all the way in New Zealand and it's just really cool. And these complimentary practice assessment tools. are fantastic because we really are helping you deep dive and figure out where your focus should be or could be to get you to the results that you're looking to gain, whether you're gonna work with us one-on-one in a group fashion or just continue being a listener no matter what. We love doing these complimentary practice assessments with you guys. And it's just really fun. It's so cool to see where dentistry is at and where you guys are at. wins and the struggles you guys are having and it's just, it's super awesome. So thank you to everybody who's here with us today. We are excited to take you on this journey with us and doctors and practice owners, leaders, whoever's here today. I really wanted to chat here. I've got my girl Dana with me and I wanted to chat about projections, scheduling, reaching goals. I think that's a huge focus. for everyone always needs to reach goals, right? But I think in 2025 so far, Dana, like we need to heavily focus on this because 2024, there was a lot that happened in 2024. It was a weird, it was a wild year, right? Like was so weird. It felt like, okay, we're getting momentum with everyone. And then it was like two steps forward, one step back, five steps forward, three steps back. And it was like, gosh, we're getting momentum. But it was an uphill battle in 2024. I don't know what happened, but holy cow, this year feels cleaner already. It feels different. Dana (01:57.805) wild. The Dental A Team (02:17.795) And I think everyone's kind of shifted their focus to the areas to be able to see what's the most important. So I'm excited to chat with you about that today, Dana, and I hope our listeners are excited for this one. I wanted to just have a conversation around what impacts production from a schedule standpoint, not technical scheduling, right? Not our blog scheduling. We've done a million freaking podcasts on that. Dana and I are not doing that today. Dana (02:45.276) No? The Dental A Team (02:47.601) But really those other pieces that impact it and how doctors can and practice owners can look at these factors and project. It's still early enough in the year that if you haven't done this yet, get on it. It's totally fine and look for those pieces. So I wanted to pick your brain a little bit Dana and I think let's take it in the space of let's talk about the things that can impact and then let's talk about how we can. project that and fix it and work it into our goals. So what are the spaces Dana that you have, your clients, and I think we likely do this the same, but what spaces and what do you have your clients look for when they are getting prepped for that next year? We're into the year and we're trying to figure out what's going on, what things impact production goals from a scheduling standpoint like that that you guys are taking a look at. Dana (03:37.388) Yeah, I love this topic because I do think when we think about production and impacts on production, we go right to scheduling, we go right to those pieces. And so I love that today is a little bit different. And I think that sometimes we just forget that like taking a vacation is going to impact production, having holidays in there, holidays will sometimes fall on work days and sometimes not. you know, yearly, they're different. so looking at how many holidays do we have in there looking at, if we're going to take CE, how much of that is time away from the office, team meetings and quarterly meetings and admin time, knowing all of those things, right? All of those things impact your production because then that isn't necessarily time spent taking care of patients. It's definitely needed. It's time spent working on the business. It's time spent training and, working on all of our processes. So The Dental A Team (04:21.495) So. Dana (04:27.664) let's not not do it because it affects production, right? Knowing it will impact your production and being able to combat that is definitely super helpful. The Dental A Team (04:31.174) Yeah, yeah. The Dental A Team (04:40.57) Yeah, I totally agree. think CE is something that a lot of doctors will find on a whim or be like, I need to take this course. You need to block me out these days. And if we're not projecting and planning for that or accommodating the schedule in other places, it really impacts it. I had a client that I was chatting with last week, an office manager, and she's like, Tiff, what the heck do I do? Like they want to make XML this month, but they took between the two doctors, they took two weeks off. And I'm like, well, This is the shared reality, right? So their reality is they're not looking at that. They're not thinking about that. Your reality is that you have to get it on the schedule. And so that's the first place you look when they come to you and tell you to shut down days is you're like, well, what am I supposed to do with this? So coming to that shared reality is huge and trying to project as much as you can the CE that you want to take this year or estimating how much CE you're willing to take time off for is always huge as well. I know I've got a few doctors that don't know when they want to take vacation or where they want to go or if they want it. And I say, you know what? That's totally fine. In the perfect world, how much vacation time do you want to take with your family? And I have one doctor that said four to six weeks. I said, great. Then plan for six. I want you to take six weeks out of your productive numbers, days of work, right, take those six weeks out of that productive time, now estimate what you can do and how we can bump those goals. Because the reality is we need to increase production and collections by seven to ten percent every year to keep up with inflation. And team members that are listening, every year, no matter what, you're going to get a new goal. It's going to be different than it was last year. And guess what? We're doing things on the other side to help you with that. just FYI, you're going to get a bigger goal every year. Soap. Soapbox. The days of the year that we're willing, that we're able to work, right, impact if how easily or how difficult it's going to be to get to that goal. So we've got to say, okay, this is our goal, this is the number of days we're working with, what do have to do every day to get to that goal? So it's not necessarily, this is how many days I have to work, what is my goal, it's, this is what I need to do, this is how many days I The Dental A Team (07:03.878) I have that I'm working, how do we fix that? So I love that. And I've had a lot of doctors in the last couple of years that have really learned to just say, okay, well, I don't have it planned yet, but I would love to take three CEs this year, because I want them and I need them. And I'd love to take this much vacation with my family. Great. Swipe it off, figure it out, we move on. Other pieces that I know have come into play, in the dental industry. So Guess what, guys? We've got a lot of babies that come along. Maternity and paternity leave is a real thing. Like, we're family. We're healthcare providers, so we are family-oriented beings. So making sure that we're considering that as well. I know a lot of offices that are like, shoot, I didn't even think about the fact that my hygienist is going to be out for three months starting in June, and what am I supposed to do? And it's May. And I'm like, my gosh. Dana (07:59.95) you The Dental A Team (08:00.052) Why didn't Oyrin want this information? You know, it's just like those pieces we forget about. So I love that. Like, Observed Holidays, which 2024 taught us a lesson if we were not pre-planning for Observed Holidays because Christmas and New Year's landed on wild days for that year, and it really, really messed with December production. 2025, it's similar. It's at least towards the end of the week, so you can work the beginning of both. Dana (08:03.246) Yeah. The Dental A Team (08:29.897) but observed holidays and how much time are you gonna give yourself and your team off during those holidays? CE, how much do you want to on vacations? How much do you want to take? How much do you aspire to take? Always go big, because you can work extra days. Team vacations, especially when it comes to providers, so associates and hygienists, you've gotta make sure we're prepped for that. Maternity and paternity leave. making sure we're prepped for all of those situations and scenarios, and I'm sure there are some that I didn't even think about, that impact it. I do have doctors that will take some time off and then they'll work maybe extra Fridays or they'll work extra long on Thursdays where they normally would have done some admin work, which I don't love you guys getting rid of admin work. and you said, know, admin hours and meeting hours, things like that, making sure we're accounting for those things. But if you can, pick up extra days for sure. I'm never gonna turn that down and neither will your schedulers. But making sure we're prepping and planning for that. some tools, Day, let's talk about some tools that we have that we have our clients utilize and then also that you guys who aren't yet clients, your listeners, use tools like this or reach out and we can help you with tools like this. Really narrowing in and making things easy is... what I love in life. I hate... doesn't mean things are never going to be hard. I hate hard forever. I want hard right now, so maybe it's time consuming right now, but it's going to save me pain later. That's what I want to do. So, Dana, we have our projections sheet, so maybe talk about our projections sheet a little bit and our scorecard and how those combine and then how you're using that now two months into the year with your clients to really assess and see. Dana (09:59.5) Yeah. The Dental A Team (10:21.856) how their goals are going for them as we're rounding out middle of Q1. Dana (10:26.092) Yeah, so I love our projection sheet and I feel like in 2024 I used it more than ever because 2024 was wild so let's prep and make sure 2025 stays a little less wild than 2024 was. The Dental A Team (10:32.031) Yeah. The Dental A Team (10:37.896) Absolutely. Dana (10:38.766) And so really, it truly is it's mapping out I go through every month of the year with the client, how many working days are we going to have for each provider, including hygiene. And some of that does take prep work as far as hey, I need you to get that information from your heart to can they do they have any idea when they're going to take vacations? Do they have any idea and then honestly, if we don't, okay, well, how much PTO do they have and we build that in just saying like, okay, well, we know they're probably going to use their two weeks of PTO somewhere. So if we even it out throughout the The Dental A Team (10:53.716) Mm-hmm. The Dental A Team (11:01.909) Yeah. Dana (11:08.72) quarters, at least we kind of have the number of days that we're working with. And so we'll go through and we'll say, where do you want to end up next year, right? We'll base it on this year, we'll look at all sorts of things. And then we go through that sheet together and we basically say, okay, if these are the number of working days, if this is the number of providers we have available, this is what it's going to take every single day from each provider to get you to your ultimate goal. So it's kind of some reverse engineering, but it also is some planning and prepping. The Dental A Team (11:34.315) Yeah. Dana (11:38.83) And I love that meeting with clients because a lot of that goes like a lot of things go into that projection sheet. And I love that you said, okay, well, how many CE days do you want to take? Sometimes actually CE comes down to what do we have for a CE budget next year? And then of that budget, okay, well, that can likely get you two to three CE courses, which then helps with our projections that way. So it is a lot of planning and prepping that goes into it. But I think that it really helps. got them down to the penny what needs to happen every single day in each month and really gives them a roadmap that then as the year starts going and we bring in our scorecard we're constantly paying attention to where are we to getting to that and following that roadmap that we set out at the end of the year prior. So it's a really cool combination of tools and and I love to see especially this year is like we're The Dental A Team (12:21.32) Yeah. The Dental A Team (12:30.802) Yeah. Dana (12:38.704) really customizing and tightening of the scorecard that it constantly has them assessing gaps, it constantly has them going back to that initial roadmap we set, and then like you said, really just looking for opportunities of what is the thing we need to focus on right now. The Dental A Team (12:56.073) I love that. think you nailed it with those. The projection sheet is huge. And then coupling it with the scorecard and a piece that made me think of that's on the scorecard. And we purposely put it on the scorecard because this made a massive impact on goals being reached last year. I think every year, but last year, it really showed its face is the number of hours that were left. open on the schedule, so they were not filled with production either in hygiene or on the doctor's schedule. So when you look at the projections, you're like, great, this is awesome. This is if I were full to 100%, this is what my estimated production could be by provider. But then when you get in there and you start utilizing the scorecard and you're looking at the production, you compare the two, you're like, why am I off and why is it hard to reach these goals? Or why am I just not there yet? Why am I not reaching this goal yet? A lot of times it's lost in those pieces that we haven't always paid a lot of attention to, or just like we could, we should, why isn't it happening? I'm not sure. Looking at those open hours and even taking into account maybe, know, with that, that you're likely going to have like 3 % or so left open on the schedule. But looking at those open hours and saying, great, well, if I multiplied these open hours by that producer's average dollar per hour, that's going to show me I probably could have made that goal. really focusing the attention in on there and bringing it back then to the schedulers and treatment planners and full team of handoffs and all of those pieces we talk about because of schedule working takes the full practice. So it takes you guys projecting, you guys prepping and planning takes, you know, the back office and front office working really hard together to ensure that our patients are the healthiest they possibly can be. Dana, you mentioned CE budget. And that made me think of the other tools that we're utilizing that I hope other practices that are not yet Delay Team clients are utilizing as well. that's in the, which goes along with that projection, right? The fixed cost spreadsheets, making sure you know what your fixed costs are. That's also part of that planning and projecting because if you're fixed costs and your bare minimum production and collections that you need in order to have the overhead that you want, The Dental A Team (15:12.587) If those aren't in alignment, right, with your projections, your projections are low, we might need to look then and say, okay, well, where can we add days back in? Because if we're not combining all of that information, again, we're just not seeing all the levels. It's like not seeing all of the colors. It's like being colorblind to orange, right? You can't see orange anywhere and you're just missing all of these flowers and things that could light up your life. It's like. not being able to see those pieces. So the fixed cost spreadsheet and knowing what are the fixed costs, what are my average payrolls, what are my average supplies, what should this look like? Then you look at what that needs to be, what are my projections, and what can I do then minus those fixed costs? What would my overhead look like if this is the projection for that month? So then tackling that and looking at that growth percentage and profit percentage, opportunity allows you then to again look at what you're projecting and what you're taking time off for and evaluate what do I need to do to change that result or it's perfect. And Dana, I you've been working a lot, a lot, a lot with the fixed cost spreadsheet. How has that impacted the being able to project and impact the schedule for the clients that you've got utilizing that spreadsheet? Dana (16:35.374) Yeah, I really used to fix costs heavily this year because I do feel like that was part of the wild of 2024 is you know with everything going on and things are just more expensive wages are higher and I feel like practices we're seeing really good growth as far as looking at production but offices who weren't keeping quite an eye on expenses got some surprises because it ended up that like The Dental A Team (16:41.294) Mm-hmm. The Dental A Team (16:56.345) Yeah. Dana (17:03.36) you know, we set goals based on the numbers from the previous year. And so then when this year's expenses are quite a bit higher, and they're not watching them, or that's not something, you know, that they consistently look at, it just became more necessary this year to ensure that they knew where their fixed costs were. And they also knew that, okay, if it's outside of this, right, that is like, The Dental A Team (17:07.769) Mm-hmm. The Dental A Team (17:17.051) you Dana (17:28.31) an alert that like we need to take a deeper dive into this and we need to really keep an eye on it. So it was something that I really incorporated in 2024. And again, this year, I love that it is part of our scorecard and the things that we're really tracking with clients because that was an area that really hit offices hard. was like, were just giving raises and we were just saying, yep, you know what, I think I can extend my pay range for this new position to this and, just making some of these The Dental A Team (17:29.614) Yeah. The Dental A Team (17:56.641) Mm-hmm. Dana (17:58.186) decisions like a little bit on the fly without crunching and saying, All right, well, what does that look like on the expense side? Or how much does that actually add to my BAM and my bottom line? And so I think that that it really was impactful this last year. And so, you know, I with you, I'm encouraging offices, even those listening to really make sure that that is a number that you know. The Dental A Team (18:08.548) Yeah. The Dental A Team (18:22.36) For sure, and I think you're 100 % right because our goal is to be profitable and to have a thriving business that patients can continue to come to. But when we're making decisions without knowing the full spectrum, makes it really difficult. And to combine those pieces into what you just said, the... holiday situation at the end of the year. I know I had a lot of clients and it was like November, December. They're like, Tiff, we decided to just take the two weeks off. And it's like, my gosh, like you just deleted four days out of December and that's massive. So going from a 16 day month to a 12 day month, like if you're averaging, you know, $10,000 a day, that's 40 grand that we've now deleted at the end of the year. There's nowhere else to put that. So being able to know. Is that 40 grand going to affect the profit, the profitability and the overhead of the practice? Do I have that saved somewhere to cover it because I'm not collecting that 40 grand in January from December, right? So having that fixed cost information and what your bare minimum is to know where your overhead will be to know, can I actually deduct 40 grand from my production? and be okay because I've got a couple practices now, you know, into January, February that are like, Tip, why is my collections low? And I'm like, well, girl, you took two weeks off in December and we collect on a lag when it comes to dental insurances. So I'm not surprised that it was a little low in January. So I think that was brilliant to really be able to combine all of those pieces. And again, that's like missing a color in the rainbow spectrum of colors, whatever. Like, you can't, you've got to look at it all. Dana (19:51.595) Yeah. The Dental A Team (20:09.444) And if any of those pieces aren't missing, it could be really detrimental to the practice. So I think that was really cool, Dana. Thank you for bringing that up. You guys, it's simple. Even though it's February, if you haven't done this yet, like, that's okay. It's not December. You guys, you're not telling me in November, you're taking four days off in December. So guess what? It's only February. It's only whatever month you decided to live, you decided to listen to this in, go do these things. So look at, if you don't have the spreadsheets, if you're not one of our clients, then... Dana (20:26.67) Thank you. The Dental A Team (20:37.805) Create one. All you have to do is know what you can work and what your estimated dollar per hours are or reach out to us. Hello@TheDentalATeam.com We're happy to help you with some ideas and some tools as well. Again, we are doing our complimentary practice assessments. Always TheDentalATeam.com That'll pop up. You can sign up for one those. This is definitely something that we talk about on those a lot. So we always look at your profitability. We always look at how the business can grow clients and non-clients. We like to help you guys with all of that. So Dana, thank you. Thank you, thank you. Thank you for being such a huge advocate of the business side of all of this and really helping your clients and all of everyone to see those pieces and for helping on the back end so much of the creation of all of these pieces. It was a huge project within our company to ensure that we had it dialed in as much as we. have the knowledge for now or know to look at. So thank you so much for that. Thank you for this conversation today and freaking rock it out. Your clients are doing amazing and I love seeing you all, you and your clients just thrive in those worlds. So super cool. Awesome. Guys, go do the things. Like I said, it's gonna take some time upfront, but it's gonna save you. pain in the long run. So do the hard right now, save yourself pain and hard later. Get those numbers dialed in. Dana had some amazing tips and tools today within all of those different areas and realms to look for. Don't forget the meetings, you guys, I think that's a space that a lot of us miss. Meetings, CE credits, those hours, those pieces like that that you're taking off in admin hours. So make sure you're calculating correctly. Go look at what can impact your production and your schedule and work with your team to make sure you guys are doing the best ever possible. Dana, thank you for being here with me today. I loved this conversation. Your brain is incredible and I don't, that's it. Like you're just freaking amazing. I don't know how you do it and I just love it. So thank you so much. Dana (22:37.826) Thanks for having me and right back at you, Tiff. The Dental A Team (22:40.263) Thank you. Thank you. All right, guys, let us know what you thought about this. We'd love to see a five-star review and letting us know how much help this information was or what have you. Hello@TheDentalATeam.com And we can't wait to see you guys, if you are not yet a client, on a complimentary practice assessment because I really, really, really want you to get dialed in this year, no matter what that looks like for us and you. And keep listening, you guys. We're gonna always have some amazing content here for you. Catch you next time.
If you're in SF, join us tomorrow for a fun meetup at CodeGen Night!If you're in NYC, join us for AI Engineer Summit! The Agent Engineering track is now sold out, but 25 tickets remain for AI Leadership and 5 tickets for the workshops. You can see the full schedule of speakers and workshops at https://ai.engineer!It's exceedingly hard to introduce someone like Bret Taylor. We could recite his Wikipedia page, or his extensive work history through Silicon Valley's greatest companies, but everyone else already does that.As a podcast by AI engineers for AI engineers, we had the opportunity to do something a little different. We wanted to dig into what Bret sees from his vantage point at the top of our industry for the last 2 decades, and how that explains the rise of the AI Architect at Sierra, the leading conversational AI/CX platform.“Across our customer base, we are seeing a new role emerge - the role of the AI architect. These leaders are responsible for helping define, manage and evolve their company's AI agent over time. They come from a variety of both technical and business backgrounds, and we think that every company will have one or many AI architects managing their AI agent and related experience.”In our conversation, Bret Taylor confirms the Paul Buchheit legend that he rewrote Google Maps in a weekend, armed with only the help of a then-nascent Google Closure Compiler and no other modern tooling. But what we find remarkable is that he was the PM of Maps, not an engineer, though of course he still identifies as one. We find this theme recurring throughout Bret's career and worldview. We think it is plain as day that AI leadership will have to be hands-on and technical, especially when the ground is shifting as quickly as it is today:“There's a lot of power in combining product and engineering into as few people as possible… few great things have been created by committee.”“If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes.”“And I think the reason why is if you look at like software as a service five years ago, maybe you can have a separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of technological breakthroughs required for most business applications. And if you're making expense reporting software or whatever, it's useful… You kind of know how databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem. "When you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it and the capabilities of the technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself.”This is the first time the difference between technical leadership for “normal” software and for “AI” software was articulated this clearly for us, and we'll be thinking a lot about this going forward. We left a lot of nuggets in the conversation, so we hope you'll just dive in with us (and thank Bret for joining the pod!)Timestamps* 00:00:02 Introductions and Bret Taylor's background* 00:01:23 Bret's experience at Stanford and the dot-com era* 00:04:04 The story of rewriting Google Maps backend* 00:11:06 Early days of interactive web applications at Google* 00:15:26 Discussion on product management and engineering roles* 00:21:00 AI and the future of software development* 00:26:42 Bret's approach to identifying customer needs and building AI companies* 00:32:09 The evolution of business models in the AI era* 00:41:00 The future of programming languages and software development* 00:49:38 Challenges in precisely communicating human intent to machines* 00:56:44 Discussion on Artificial General Intelligence (AGI) and its impact* 01:08:51 The future of agent-to-agent communication* 01:14:03 Bret's involvement in the OpenAI leadership crisis* 01:22:11 OpenAI's relationship with Microsoft* 01:23:23 OpenAI's mission and priorities* 01:27:40 Bret's guiding principles for career choices* 01:29:12 Brief discussion on pasta-making* 01:30:47 How Bret keeps up with AI developments* 01:32:15 Exciting research directions in AI* 01:35:19 Closing remarks and hiring at Sierra Transcript[00:02:05] Introduction and Guest Welcome[00:02:05] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co host swyx, founder of smol.ai.[00:02:17] swyx: Hey, and today we're super excited to have Bret Taylor join us. Welcome. Thanks for having me. It's a little unreal to have you in the studio.[00:02:25] swyx: I've read about you so much over the years, like even before. Open AI effectively. I mean, I use Google Maps to get here. So like, thank you for everything that you've done. Like, like your story history, like, you know, I think people can find out what your greatest hits have been.[00:02:40] Bret Taylor's Early Career and Education[00:02:40] swyx: How do you usually like to introduce yourself when, you know, you talk about, you summarize your career, like, how do you look at yourself?[00:02:47] Bret: Yeah, it's a great question. You know, we, before we went on the mics here, we're talking about the audience for this podcast being more engineering. And I do think depending on the audience, I'll introduce myself differently because I've had a lot of [00:03:00] corporate and board roles. I probably self identify as an engineer more than anything else though.[00:03:04] Bret: So even when I was. Salesforce, I was coding on the weekends. So I think of myself as an engineer and then all the roles that I do in my career sort of start with that just because I do feel like engineering is sort of a mindset and how I approach most of my life. So I'm an engineer first and that's how I describe myself.[00:03:24] Bret: You majored in computer[00:03:25] swyx: science, like 1998. And, and I was high[00:03:28] Bret: school, actually my, my college degree was Oh, two undergrad. Oh, three masters. Right. That old.[00:03:33] swyx: Yeah. I mean, no, I was going, I was going like 1998 to 2003, but like engineering wasn't as, wasn't a thing back then. Like we didn't have the title of senior engineer, you know, kind of like, it was just.[00:03:44] swyx: You were a programmer, you were a developer, maybe. What was it like in Stanford? Like, what was that feeling like? You know, was it, were you feeling like on the cusp of a great computer revolution? Or was it just like a niche, you know, interest at the time?[00:03:57] Stanford and the Dot-Com Bubble[00:03:57] Bret: Well, I was at Stanford, as you said, from 1998 to [00:04:00] 2002.[00:04:02] Bret: 1998 was near the peak of the dot com bubble. So. This is back in the day where most people that they're coding in the computer lab, just because there was these sun microsystems, Unix boxes there that most of us had to do our assignments on. And every single day there was a. com like buying pizza for everybody.[00:04:20] Bret: I didn't have to like, I got. Free food, like my first two years of university and then the dot com bubble burst in the middle of my college career. And so by the end there was like tumbleweed going to the job fair, you know, it was like, cause it was hard to describe unless you were there at the time, the like level of hype and being a computer science major at Stanford was like, A thousand opportunities.[00:04:45] Bret: And then, and then when I left, it was like Microsoft, IBM.[00:04:49] Joining Google and Early Projects[00:04:49] Bret: And then the two startups that I applied to were VMware and Google. And I ended up going to Google in large part because a woman named Marissa Meyer, who had been a teaching [00:05:00] assistant when I was, what was called a section leader, which was like a junior teaching assistant kind of for one of the big interest.[00:05:05] Bret: Yes. Classes. She had gone there. And she was recruiting me and I knew her and it was sort of felt safe, you know, like, I don't know. I thought about it much, but it turned out to be a real blessing. I realized like, you know, you always want to think you'd pick Google if given the option, but no one knew at the time.[00:05:20] Bret: And I wonder if I'd graduated in like 1999 where I've been like, mom, I just got a job at pets. com. It's good. But you know, at the end I just didn't have any options. So I was like, do I want to go like make kernel software at VMware? Do I want to go build search at Google? And I chose Google. 50, 50 ball.[00:05:36] Bret: I'm not really a 50, 50 ball. So I feel very fortunate in retrospect that the economy collapsed because in some ways it forced me into like one of the greatest companies of all time, but I kind of lucked into it, I think.[00:05:47] The Google Maps Rewrite Story[00:05:47] Alessio: So the famous story about Google is that you rewrote the Google maps back in, in one week after the map quest quest maps acquisition, what was the story there?[00:05:57] Alessio: Is it. Actually true. Is it [00:06:00] being glorified? Like how, how did that come to be? And is there any detail that maybe Paul hasn't shared before?[00:06:06] Bret: It's largely true, but I'll give the color commentary. So it was actually the front end, not the back end, but it turns out for Google maps, the front end was sort of the hard part just because Google maps was.[00:06:17] Bret: Largely the first ish kind of really interactive web application, say first ish. I think Gmail certainly was though Gmail, probably a lot of people then who weren't engineers probably didn't appreciate its level of interactivity. It was just fast, but. Google maps, because you could drag the map and it was sort of graphical.[00:06:38] Bret: My, it really in the mainstream, I think, was it a map[00:06:41] swyx: quest back then that was, you had the arrows up and down, it[00:06:44] Bret: was up and down arrows. Each map was a single image and you just click left and then wait for a few seconds to the new map to let it was really small too, because generating a big image was kind of expensive on computers that day.[00:06:57] Bret: So Google maps was truly innovative in that [00:07:00] regard. The story on it. There was a small company called where two technologies started by two Danish brothers, Lars and Jens Rasmussen, who are two of my closest friends now. They had made a windows app called expedition, which had beautiful maps. Even in 2000.[00:07:18] Bret: For whenever we acquired or sort of acquired their company, Windows software was not particularly fashionable, but they were really passionate about mapping and we had made a local search product that was kind of middling in terms of popularity, sort of like a yellow page of search product. So we wanted to really go into mapping.[00:07:36] Bret: We'd started working on it. Their small team seemed passionate about it. So we're like, come join us. We can build this together.[00:07:42] Technical Challenges and Innovations[00:07:42] Bret: It turned out to be a great blessing that they had built a windows app because you're less technically constrained when you're doing native code than you are building a web browser, particularly back then when there weren't really interactive web apps and it ended up.[00:07:56] Bret: Changing the level of quality that we [00:08:00] wanted to hit with the app because we were shooting for something that felt like a native windows application. So it was a really good fortune that we sort of, you know, their unusual technical choices turned out to be the greatest blessing. So we spent a lot of time basically saying, how can you make a interactive draggable map in a web browser?[00:08:18] Bret: How do you progressively load, you know, new map tiles, you know, as you're dragging even things like down in the weeds of the browser at the time, most browsers like Internet Explorer, which was dominant at the time would only load two images at a time from the same domain. So we ended up making our map tile servers have like.[00:08:37] Bret: Forty different subdomains so we could load maps and parallels like lots of hacks. I'm happy to go into as much as like[00:08:44] swyx: HTTP connections and stuff.[00:08:46] Bret: They just like, there was just maximum parallelism of two. And so if you had a map, set of map tiles, like eight of them, so So we just, we were down in the weeds of the browser anyway.[00:08:56] Bret: So it was lots of plumbing. I can, I know a lot more about browsers than [00:09:00] most people, but then by the end of it, it was fairly, it was a lot of duct tape on that code. If you've ever done an engineering project where you're not really sure the path from point A to point B, it's almost like. Building a house by building one room at a time.[00:09:14] Bret: The, there's not a lot of architectural cohesion at the end. And then we acquired a company called Keyhole, which became Google earth, which was like that three, it was a native windows app as well, separate app, great app, but with that, we got licenses to all this satellite imagery. And so in August of 2005, we added.[00:09:33] Bret: Satellite imagery to Google Maps, which added even more complexity in the code base. And then we decided we wanted to support Safari. There was no mobile phones yet. So Safari was this like nascent browser on, on the Mac. And it turns out there's like a lot of decisions behind the scenes, sort of inspired by this windows app, like heavy use of XML and XSLT and all these like.[00:09:54] Bret: Technologies that were like briefly fashionable in the early two thousands and everyone hates now for good [00:10:00] reason. And it turns out that all of the XML functionality and Internet Explorer wasn't supporting Safari. So people are like re implementing like XML parsers. And it was just like this like pile of s**t.[00:10:11] Bret: And I had to say a s**t on your part. Yeah, of[00:10:12] Alessio: course.[00:10:13] Bret: So. It went from this like beautifully elegant application that everyone was proud of to something that probably had hundreds of K of JavaScript, which sounds like nothing. Now we're talking like people have modems, you know, not all modems, but it was a big deal.[00:10:29] Bret: So it was like slow. It took a while to load and just, it wasn't like a great code base. Like everything was fragile. So I just got. Super frustrated by it. And then one weekend I did rewrite all of it. And at the time the word JSON hadn't been coined yet too, just to give you a sense. So it's all XML.[00:10:47] swyx: Yeah.[00:10:47] Bret: So we used what is now you would call JSON, but I just said like, let's use eval so that we can parse the data fast. And, and again, that's, it would literally as JSON, but at the time there was no name for it. So we [00:11:00] just said, let's. Pass on JavaScript from the server and eval it. And then somebody just refactored the whole thing.[00:11:05] Bret: And, and it wasn't like I was some genius. It was just like, you know, if you knew everything you wished you had known at the beginning and I knew all the functionality, cause I was the primary, one of the primary authors of the JavaScript. And I just like, I just drank a lot of coffee and just stayed up all weekend.[00:11:22] Bret: And then I, I guess I developed a bit of reputation and no one knew about this for a long time. And then Paul who created Gmail and I ended up starting a company with him too, after all of this told this on a podcast and now it's large, but it's largely true. I did rewrite it and it, my proudest thing.[00:11:38] Bret: And I think JavaScript people appreciate this. Like the un G zipped bundle size for all of Google maps. When I rewrote, it was 20 K G zipped. It was like much smaller for the entire application. It went down by like 10 X. So. What happened on Google? Google is a pretty mainstream company. And so like our usage is shot up because it turns out like it's faster.[00:11:57] Bret: Just being faster is worth a lot of [00:12:00] percentage points of growth at a scale of Google. So how[00:12:03] swyx: much modern tooling did you have? Like test suites no compilers.[00:12:07] Bret: Actually, that's not true. We did it one thing. So I actually think Google, I, you can. Download it. There's a, Google has a closure compiler, a closure compiler.[00:12:15] Bret: I don't know if anyone still uses it. It's gone. Yeah. Yeah. It's sort of gone out of favor. Yeah. Well, even until recently it was better than most JavaScript minifiers because it was more like it did a lot more renaming of variables and things. Most people use ES build now just cause it's fast and closure compilers built on Java and super slow and stuff like that.[00:12:37] Bret: But, so we did have that, that was it. Okay.[00:12:39] The Evolution of Web Applications[00:12:39] Bret: So and that was treated internally, you know, it was a really interesting time at Google at the time because there's a lot of teams working on fairly advanced JavaScript when no one was. So Google suggest, which Kevin Gibbs was the tech lead for, was the first kind of type ahead, autocomplete, I believe in a web browser, and now it's just pervasive in search boxes that you sort of [00:13:00] see a type ahead there.[00:13:01] Bret: I mean, chat, dbt[00:13:01] swyx: just added it. It's kind of like a round trip.[00:13:03] Bret: Totally. No, it's now pervasive as a UI affordance, but that was like Kevin's 20 percent project. And then Gmail, Paul you know, he tells the story better than anyone, but he's like, you know, basically was scratching his own itch, but what was really neat about it is email, because it's such a productivity tool, just needed to be faster.[00:13:21] Bret: So, you know, he was scratching his own itch of just making more stuff work on the client side. And then we, because of Lars and Yen sort of like setting the bar of this windows app or like we need our maps to be draggable. So we ended up. Not only innovate in terms of having a big sync, what would be called a single page application today, but also all the graphical stuff you know, we were crashing Firefox, like it was going out of style because, you know, when you make a document object model with the idea that it's a document and then you layer on some JavaScript and then we're essentially abusing all of this, it just was running into code paths that were not.[00:13:56] Bret: Well, it's rotten, you know, at this time. And so it was [00:14:00] super fun. And, and, you know, in the building you had, so you had compilers, people helping minify JavaScript just practically, but there is a great engineering team. So they were like, that's why Closure Compiler is so good. It was like a. Person who actually knew about programming languages doing it, not just, you know, writing regular expressions.[00:14:17] Bret: And then the team that is now the Chrome team believe, and I, I don't know this for a fact, but I'm pretty sure Google is the main contributor to Firefox for a long time in terms of code. And a lot of browser people were there. So every time we would crash Firefox, we'd like walk up two floors and say like, what the hell is going on here?[00:14:35] Bret: And they would load their browser, like in a debugger. And we could like figure out exactly what was breaking. And you can't change the code, right? Cause it's the browser. It's like slow, right? I mean, slow to update. So, but we could figure out exactly where the bug was and then work around it in our JavaScript.[00:14:52] Bret: So it was just like new territory. Like so super, super fun time, just like a lot of, a lot of great engineers figuring out [00:15:00] new things. And And now, you know, the word, this term is no longer in fashion, but the word Ajax, which was asynchronous JavaScript and XML cause I'm telling you XML, but see the word XML there, to be fair, the way you made HTTP requests from a client to server was this.[00:15:18] Bret: Object called XML HTTP request because Microsoft and making Outlook web access back in the day made this and it turns out to have nothing to do with XML. It's just a way of making HTTP requests because XML was like the fashionable thing. It was like that was the way you, you know, you did it. But the JSON came out of that, you know, and then a lot of the best practices around building JavaScript applications is pre React.[00:15:44] Bret: I think React was probably the big conceptual step forward that we needed. Even my first social network after Google, we used a lot of like HTML injection and. Making real time updates was still very hand coded and it's really neat when you [00:16:00] see conceptual breakthroughs like react because it's, I just love those things where it's like obvious once you see it, but it's so not obvious until you do.[00:16:07] Bret: And actually, well, I'm sure we'll get into AI, but I, I sort of feel like we'll go through that evolution with AI agents as well that I feel like we're missing a lot of the core abstractions that I think in 10 years we'll be like, gosh, how'd you make agents? Before that, you know, but it was kind of that early days of web applications.[00:16:22] swyx: There's a lot of contenders for the reactive jobs of of AI, but no clear winner yet. I would say one thing I was there for, I mean, there's so much we can go into there. You just covered so much.[00:16:32] Product Management and Engineering Synergy[00:16:32] swyx: One thing I just, I just observe is that I think the early Google days had this interesting mix of PM and engineer, which I think you are, you didn't, you didn't wait for PM to tell you these are my, this is my PRD.[00:16:42] swyx: This is my requirements.[00:16:44] mix: Oh,[00:16:44] Bret: okay.[00:16:45] swyx: I wasn't technically a software engineer. I mean,[00:16:48] Bret: by title, obviously. Right, right, right.[00:16:51] swyx: It's like a blend. And I feel like these days, product is its own discipline and its own lore and own industry and engineering is its own thing. And there's this process [00:17:00] that happens and they're kind of separated, but you don't produce as good of a product as if they were the same person.[00:17:06] swyx: And I'm curious, you know, if, if that, if that sort of resonates in, in, in terms of like comparing early Google versus modern startups that you see out there,[00:17:16] Bret: I certainly like wear a lot of hats. So, you know, sort of biased in this, but I really agree that there's a lot of power and combining product design engineering into as few people as possible because, you know few great things have been created by committee, you know, and so.[00:17:33] Bret: If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a. Maniacal focus on outcomes.[00:17:53] Bret: And I think the reason why it's, I think for some areas, if you look at like software as a service five years ago, maybe you can have a [00:18:00] separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of like. Technological breakthroughs required for most, you know, business applications.[00:18:11] Bret: And if you're making expense reporting software or whatever, it's useful. I don't mean to be dismissive of expense reporting software, but you probably just want to understand like, what are the requirements of the finance department? What are the requirements of an individual file expense report? Okay.[00:18:25] Bret: Go implement that. And you kind of know how web applications are implemented. You kind of know how to. How databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem when you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it.[00:18:58] Bret: And the capabilities of the [00:19:00] technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself. And that's why I use the word conversation. It's not literal. That's sort of funny to use that word in the age of conversational AI.[00:19:15] Bret: You're constantly sort of saying, like, ideally, you could sprinkle some magic AI pixie dust and solve all the world's problems, but it's not the way it works. And it turns out that actually, I'll just give an interesting example.[00:19:26] AI Agents and Modern Tooling[00:19:26] Bret: I think most people listening probably use co pilots to code like Cursor or Devon or Microsoft Copilot or whatever.[00:19:34] Bret: Most of those tools are, they're remarkable. I'm, I couldn't, you know, imagine development without them now, but they're not autonomous yet. Like I wouldn't let it just write most code without my interactively inspecting it. We just are somewhere between it's an amazing co pilot and it's an autonomous software engineer.[00:19:53] Bret: As a product manager, like your aspirations for what the product is are like kind of meaningful. But [00:20:00] if you're a product person, yeah, of course you'd say it should be autonomous. You should click a button and program should come out the other side. The requirements meaningless. Like what matters is like, what is based on the like very nuanced limitations of the technology.[00:20:14] Bret: What is it capable of? And then how do you maximize the leverage? It gives a software engineering team, given those very nuanced trade offs. Coupled with the fact that those nuanced trade offs are changing more rapidly than any technology in my memory, meaning every few months you'll have new models with new capabilities.[00:20:34] Bret: So how do you construct a product that can absorb those new capabilities as rapidly as possible as well? That requires such a combination of technical depth and understanding the customer that you really need more integration. Of product design and engineering. And so I think it's why with these big technology waves, I think startups have a bit of a leg up relative to incumbents because they [00:21:00] tend to be sort of more self actualized in terms of just like bringing those disciplines closer together.[00:21:06] Bret: And in particular, I think entrepreneurs, the proverbial full stack engineers, you know, have a leg up as well because. I think most breakthroughs happen when you have someone who can understand those extremely nuanced technical trade offs, have a vision for a product. And then in the process of building it, have that, as I said, like metaphorical conversation with the technology, right?[00:21:30] Bret: Gosh, I ran into a technical limit that I didn't expect. It's not just like changing that feature. You might need to refactor the whole product based on that. And I think that's, that it's particularly important right now. So I don't, you know, if you, if you're building a big ERP system, probably there's a great reason to have product and engineering.[00:21:51] Bret: I think in general, the disciplines are there for a reason. I think when you're dealing with something as nuanced as the like technologies, like large language models today, there's a ton of [00:22:00] advantage of having. Individuals or organizations that integrate the disciplines more formally.[00:22:05] Alessio: That makes a lot of sense.[00:22:06] Alessio: I've run a lot of engineering teams in the past, and I think the product versus engineering tension has always been more about effort than like whether or not the feature is buildable. But I think, yeah, today you see a lot more of like. Models actually cannot do that. And I think the most interesting thing is on the startup side, people don't yet know where a lot of the AI value is going to accrue.[00:22:26] Alessio: So you have this rush of people building frameworks, building infrastructure, layered things, but we don't really know the shape of the compute. I'm curious that Sierra, like how you thought about building an house, a lot of the tooling for evals or like just, you know, building the agents and all of that.[00:22:41] Alessio: Versus how you see some of the startup opportunities that is maybe still out there.[00:22:46] Bret: We build most of our tooling in house at Sierra, not all. It's, we don't, it's not like not invented here syndrome necessarily, though, maybe slightly guilty of that in some ways, but because we're trying to build a platform [00:23:00] that's in Dorian, you know, we really want to have control over our own destiny.[00:23:03] Bret: And you had made a comment earlier that like. We're still trying to figure out who like the reactive agents are and the jury is still out. I would argue it hasn't been created yet. I don't think the jury is still out to go use that metaphor. We're sort of in the jQuery era of agents, not the react era.[00:23:19] Bret: And, and that's like a throwback for people listening,[00:23:22] swyx: we shouldn't rush it. You know?[00:23:23] Bret: No, yeah, that's my point is. And so. Because we're trying to create an enduring company at Sierra that outlives us, you know, I'm not sure we want to like attach our cart to some like to a horse where it's not clear that like we've figured out and I actually want as a company, we're trying to enable just at a high level and I'll, I'll quickly go back to tech at Sierra, we help consumer brands build customer facing AI agents.[00:23:48] Bret: So. Everyone from Sonos to ADT home security to Sirius XM, you know, if you call them on the phone and AI will pick up with you, you know, chat with them on the Sirius XM homepage. It's an AI agent called Harmony [00:24:00] that they've built on our platform. We're what are the contours of what it means for someone to build an end to end complete customer experience with AI with conversational AI.[00:24:09] Bret: You know, we really want to dive into the deep end of, of all the trade offs to do it. You know, where do you use fine tuning? Where do you string models together? You know, where do you use reasoning? Where do you use generation? How do you use reasoning? How do you express the guardrails of an agentic process?[00:24:25] Bret: How do you impose determinism on a fundamentally non deterministic technology? There's just a lot of really like as an important design space. And I could sit here and tell you, we have the best approach. Every entrepreneur will, you know. But I hope that in two years, we look back at our platform and laugh at how naive we were, because that's the pace of change broadly.[00:24:45] Bret: If you talk about like the startup opportunities, I'm not wholly skeptical of tools companies, but I'm fairly skeptical. There's always an exception for every role, but I believe that certainly there's a big market for [00:25:00] frontier models, but largely for companies with huge CapEx budgets. So. Open AI and Microsoft's Anthropic and Amazon Web Services, Google Cloud XAI, which is very well capitalized now, but I think the, the idea that a company can make money sort of pre training a foundation model is probably not true.[00:25:20] Bret: It's hard to, you're competing with just, you know, unreasonably large CapEx budgets. And I just like the cloud infrastructure market, I think will be largely there. I also really believe in the applications of AI. And I define that not as like building agents or things like that. I define it much more as like, you're actually solving a problem for a business.[00:25:40] Bret: So it's what Harvey is doing in legal profession or what cursor is doing for software engineering or what we're doing for customer experience and customer service. The reason I believe in that is I do think that in the age of AI, what's really interesting about software is it can actually complete a task.[00:25:56] Bret: It can actually do a job, which is very different than the value proposition of [00:26:00] software was to ancient history two years ago. And as a consequence, I think the way you build a solution and For a domain is very different than you would have before, which means that it's not obvious, like the incumbent incumbents have like a leg up, you know, necessarily, they certainly have some advantages, but there's just such a different form factor, you know, for providing a solution and it's just really valuable.[00:26:23] Bret: You know, it's. Like just think of how much money cursor is saving software engineering teams or the alternative, how much revenue it can produce tool making is really challenging. If you look at the cloud market, just as a analog, there are a lot of like interesting tools, companies, you know, Confluent, Monetized Kafka, Snowflake, Hortonworks, you know, there's a, there's a bunch of them.[00:26:48] Bret: A lot of them, you know, have that mix of sort of like like confluence or have the open source or open core or whatever you call it. I, I, I'm not an expert in this area. You know, I do think [00:27:00] that developers are fickle. I think that in the tool space, I probably like. Default towards open source being like the area that will win.[00:27:09] Bret: It's hard to build a company around this and then you end up with companies sort of built around open source to that can work. Don't get me wrong, but I just think that it's nowadays the tools are changing so rapidly that I'm like, not totally skeptical of tool makers, but I just think that open source will broadly win, but I think that the CapEx required for building frontier models is such that it will go to a handful of big companies.[00:27:33] Bret: And then I really believe in agents for specific domains which I think will, it's sort of the analog to software as a service in this new era. You know, it's like, if you just think of the cloud. You can lease a server. It's just a low level primitive, or you can buy an app like you know, Shopify or whatever.[00:27:51] Bret: And most people building a storefront would prefer Shopify over hand rolling their e commerce storefront. I think the same thing will be true of AI. So [00:28:00] I've. I tend to like, if I have a, like an entrepreneur asked me for advice, I'm like, you know, move up the stack as far as you can towards a customer need.[00:28:09] Bret: Broadly, but I, but it doesn't reduce my excitement about what is the reactive building agents kind of thing, just because it is, it is the right question to ask, but I think we'll probably play out probably an open source space more than anything else.[00:28:21] swyx: Yeah, and it's not a priority for you. There's a lot in there.[00:28:24] swyx: I'm kind of curious about your idea maze towards, there are many customer needs. You happen to identify customer experience as yours, but it could equally have been coding assistance or whatever. I think for some, I'm just kind of curious at the top down, how do you look at the world in terms of the potential problem space?[00:28:44] swyx: Because there are many people out there who are very smart and pick the wrong problem.[00:28:47] Bret: Yeah, that's a great question.[00:28:48] Future of Software Development[00:28:48] Bret: By the way, I would love to talk about the future of software, too, because despite the fact it didn't pick coding, I have a lot of that, but I can talk to I can answer your question, though, you know I think when a technology is as [00:29:00] cool as large language models.[00:29:02] Bret: You just see a lot of people starting from the technology and searching for a problem to solve. And I think it's why you see a lot of tools companies, because as a software engineer, you start building an app or a demo and you, you encounter some pain points. You're like,[00:29:17] swyx: a lot of[00:29:17] Bret: people are experiencing the same pain point.[00:29:19] Bret: What if I make it? That it's just very incremental. And you know, I always like to use the metaphor, like you can sell coffee beans, roasted coffee beans. You can add some value. You took coffee beans and you roasted them and roasted coffee beans largely, you know, are priced relative to the cost of the beans.[00:29:39] Bret: Or you can sell a latte and a latte. Is rarely priced directly like as a percentage of coffee bean prices. In fact, if you buy a latte at the airport, it's a captive audience. So it's a really expensive latte. And there's just a lot that goes into like. How much does a latte cost? And I bring it up because there's a supply chain from growing [00:30:00] coffee beans to roasting coffee beans to like, you know, you could make one at home or you could be in the airport and buy one and the margins of the company selling lattes in the airport is a lot higher than the, you know, people roasting the coffee beans and it's because you've actually solved a much more acute human problem in the airport.[00:30:19] Bret: And, and it's just worth a lot more to that person in that moment. It's kind of the way I think about technology too. It sounds funny to liken it to coffee beans, but you're selling tools on top of a large language model yet in some ways your market is big, but you're probably going to like be price compressed just because you're sort of a piece of infrastructure and then you have open source and all these other things competing with you naturally.[00:30:43] Bret: If you go and solve a really big business problem for somebody, that's actually like a meaningful business problem that AI facilitates, they will value it according to the value of that business problem. And so I actually feel like people should just stop. You're like, no, that's, that's [00:31:00] unfair. If you're searching for an idea of people, I, I love people trying things, even if, I mean, most of the, a lot of the greatest ideas have been things no one believed in.[00:31:07] Bret: So I like, if you're passionate about something, go do it. Like who am I to say, yeah, a hundred percent. Or Gmail, like Paul as far, I mean I, some of it's Laura at this point, but like Gmail is Paul's own email for a long time. , and then I amusingly and Paul can't correct me, I'm pretty sure he sent her in a link and like the first comment was like, this is really neat.[00:31:26] Bret: It would be great. It was not your email, but my own . I don't know if it's a true story. I'm pretty sure it's, yeah, I've read that before. So scratch your own niche. Fine. Like it depends on what your goal is. If you wanna do like a venture backed company, if its a. Passion project, f*****g passion, do it like don't listen to anybody.[00:31:41] Bret: In fact, but if you're trying to start, you know an enduring company, solve an important business problem. And I, and I do think that in the world of agents, the software industries has shifted where you're not just helping people more. People be more productive, but you're actually accomplishing tasks autonomously.[00:31:58] Bret: And as a consequence, I think the [00:32:00] addressable market has just greatly expanded just because software can actually do things now and actually accomplish tasks and how much is coding autocomplete worth. A fair amount. How much is the eventual, I'm certain we'll have it, the software agent that actually writes the code and delivers it to you, that's worth a lot.[00:32:20] Bret: And so, you know, I would just maybe look up from the large language models and start thinking about the economy and, you know, think from first principles. I don't wanna get too far afield, but just think about which parts of the economy. We'll benefit most from this intelligence and which parts can absorb it most easily.[00:32:38] Bret: And what would an agent in this space look like? Who's the customer of it is the technology feasible. And I would just start with these business problems more. And I think, you know, the best companies tend to have great engineers who happen to have great insight into a market. And it's that last part that I think some people.[00:32:56] Bret: Whether or not they have, it's like people start so much in the technology, they [00:33:00] lose the forest for the trees a little bit.[00:33:02] Alessio: How do you think about the model of still selling some sort of software versus selling more package labor? I feel like when people are selling the package labor, it's almost more stateless, you know, like it's easier to swap out if you're just putting an input and getting an output.[00:33:16] Alessio: If you think about coding, if there's no ID, you're just putting a prompt and getting back an app. It doesn't really matter. Who generates the app, you know, you have less of a buy in versus the platform you're building, I'm sure on the backend customers have to like put on their documentation and they have, you know, different workflows that they can tie in what's kind of like the line to draw there versus like going full where you're managed customer support team as a service outsource versus.[00:33:40] Alessio: This is the Sierra platform that you can build on. What was that decision? I'll sort of[00:33:44] Bret: like decouple the question in some ways, which is when you have something that's an agent, who is the person using it and what do they want to do with it? So let's just take your coding agent for a second. I will talk about Sierra as well.[00:33:59] Bret: Who's the [00:34:00] customer of a, an agent that actually produces software? Is it a software engineering manager? Is it a software engineer? And it's there, you know, intern so to speak. I don't know. I mean, we'll figure this out over the next few years. Like what is that? And is it generating code that you then review?[00:34:16] Bret: Is it generating code with a set of unit tests that pass, what is the actual. For lack of a better word contract, like, how do you know that it did what you wanted it to do? And then I would say like the product and the pricing, the packaging model sort of emerged from that. And I don't think the world's figured out.[00:34:33] Bret: I think it'll be different for every agent. You know, in our customer base, we do what's called outcome based pricing. So essentially every time the AI agent. Solves the problem or saves a customer or whatever it might be. There's a pre negotiated rate for that. We do that. Cause it's, we think that that's sort of the correct way agents, you know, should be packaged.[00:34:53] Bret: I look back at the history of like cloud software and notably the introduction of the browser, which led to [00:35:00] software being delivered in a browser, like Salesforce to. Famously invented sort of software as a service, which is both a technical delivery model through the browser, but also a business model, which is you subscribe to it rather than pay for a perpetual license.[00:35:13] Bret: Those two things are somewhat orthogonal, but not really. If you think about the idea of software running in a browser, that's hosted. Data center that you don't own, you sort of needed to change the business model because you don't, you can't really buy a perpetual license or something otherwise like, how do you afford making changes to it?[00:35:31] Bret: So it only worked when you were buying like a new version every year or whatever. So to some degree, but then the business model shift actually changed business as we know it, because now like. Things like Adobe Photoshop. Now you subscribe to rather than purchase. So it ended up where you had a technical shift and a business model shift that were very logically intertwined that actually the business model shift was turned out to be as significant as the technical as the shift.[00:35:59] Bret: And I think with [00:36:00] agents, because they actually accomplish a job, I do think that it doesn't make sense to me that you'd pay for the privilege of like. Using the software like that coding agent, like if it writes really bad code, like fire it, you know, I don't know what the right metaphor is like you should pay for a job.[00:36:17] Bret: Well done in my opinion. I mean, that's how you pay your software engineers, right? And[00:36:20] swyx: and well, not really. We paid to put them on salary and give them options and they vest over time. That's fair.[00:36:26] Bret: But my point is that you don't pay them for how many characters they write, which is sort of the token based, you know, whatever, like, There's a, that famous Apple story where we're like asking for a report of how many lines of code you wrote.[00:36:40] Bret: And one of the engineers showed up with like a negative number cause he had just like done a big refactoring. There was like a big F you to management who didn't understand how software is written. You know, my sense is like the traditional usage based or seat based thing. It's just going to look really antiquated.[00:36:55] Bret: Cause it's like asking your software engineer, how many lines of code did you write today? Like who cares? Like, cause [00:37:00] absolutely no correlation. So my old view is I don't think it's be different in every category, but I do think that that is the, if an agent is doing a job, you should, I think it properly incentivizes the maker of that agent and the customer of, of your pain for the job well done.[00:37:16] Bret: It's not always perfect to measure. It's hard to measure engineering productivity, but you can, you should do something other than how many keys you typed, you know Talk about perverse incentives for AI, right? Like I can write really long functions to do the same thing, right? So broadly speaking, you know, I do think that we're going to see a change in business models of software towards outcomes.[00:37:36] Bret: And I think you'll see a change in delivery models too. And, and, you know, in our customer base you know, we empower our customers to really have their hands on the steering wheel of what the agent does they, they want and need that. But the role is different. You know, at a lot of our customers, the customer experience operations folks have renamed themselves the AI architects, which I think is really cool.[00:37:55] Bret: And, you know, it's like in the early days of the Internet, there's the role of the webmaster. [00:38:00] And I don't know whether your webmaster is not a fashionable, you know, Term, nor is it a job anymore? I just, I don't know. Will they, our tech stand the test of time? Maybe, maybe not. But I do think that again, I like, you know, because everyone listening right now is a software engineer.[00:38:14] Bret: Like what is the form factor of a coding agent? And actually I'll, I'll take a breath. Cause actually I have a bunch of pins on them. Like I wrote a blog post right before Christmas, just on the future of software development. And one of the things that's interesting is like, if you look at the way I use cursor today, as an example, it's inside of.[00:38:31] Bret: A repackaged visual studio code environment. I sometimes use the sort of agentic parts of it, but it's largely, you know, I've sort of gotten a good routine of making it auto complete code in the way I want through tuning it properly when it actually can write. I do wonder what like the future of development environments will look like.[00:38:55] Bret: And to your point on what is a software product, I think it's going to change a lot in [00:39:00] ways that will surprise us. But I always use, I use the metaphor in my blog post of, have you all driven around in a way, Mo around here? Yeah, everyone has. And there are these Jaguars, the really nice cars, but it's funny because it still has a steering wheel, even though there's no one sitting there and the steering wheels like turning and stuff clearly in the future.[00:39:16] Bret: If once we get to that, be more ubiquitous, like why have the steering wheel and also why have all the seats facing forward? Maybe just for car sickness. I don't know, but you could totally rearrange the car. I mean, so much of the car is oriented around the driver, so. It stands to reason to me that like, well, autonomous agents for software engineering run through visual studio code.[00:39:37] Bret: That seems a little bit silly because having a single source code file open one at a time is kind of a goofy form factor for when like the code isn't being written primarily by you, but it begs the question of what's your relationship with that agent. And I think the same is true in our industry of customer experience, which is like.[00:39:55] Bret: Who are the people managing this agent? What are the tools do they need? And they definitely need [00:40:00] tools, but it's probably pretty different than the tools we had before. It's certainly different than training a contact center team. And as software engineers, I think that I would like to see particularly like on the passion project side or research side.[00:40:14] Bret: More innovation in programming languages. I think that we're bringing the cost of writing code down to zero. So the fact that we're still writing Python with AI cracks me up just cause it's like literally was designed to be ergonomic to write, not safe to run or fast to run. I would love to see more innovation and how we verify program correctness.[00:40:37] Bret: I studied for formal verification in college a little bit and. It's not very fashionable because it's really like tedious and slow and doesn't work very well. If a lot of code is being written by a machine, you know, one of the primary values we can provide is verifying that it actually does what we intend that it does.[00:40:56] Bret: I think there should be lots of interesting things in the software development life cycle, like how [00:41:00] we think of testing and everything else, because. If you think about if we have to manually read every line of code that's coming out as machines, it will just rate limit how much the machines can do. The alternative is totally unsafe.[00:41:13] Bret: So I wouldn't want to put code in production that didn't go through proper code review and inspection. So my whole view is like, I actually think there's like an AI native I don't think the coding agents don't work well enough to do this yet, but once they do, what is sort of an AI native software development life cycle and how do you actually.[00:41:31] Bret: Enable the creators of software to produce the highest quality, most robust, fastest software and know that it's correct. And I think that's an incredible opportunity. I mean, how much C code can we rewrite and rust and make it safe so that there's fewer security vulnerabilities. Can we like have more efficient, safer code than ever before?[00:41:53] Bret: And can you have someone who's like that guy in the matrix, you know, like staring at the little green things, like where could you have an operator [00:42:00] of a code generating machine be like superhuman? I think that's a cool vision. And I think too many people are focused on like. Autocomplete, you know, right now, I'm not, I'm not even, I'm guilty as charged.[00:42:10] Bret: I guess in some ways, but I just like, I'd like to see some bolder ideas. And that's why when you were joking, you know, talking about what's the react of whatever, I think we're clearly in a local maximum, you know, metaphor, like sort of conceptual local maximum, obviously it's moving really fast. I think we're moving out of it.[00:42:26] Alessio: Yeah. At the end of 23, I've read this blog post from syntax to semantics. Like if you think about Python. It's taking C and making it more semantic and LLMs are like the ultimate semantic program, right? You can just talk to them and they can generate any type of syntax from your language. But again, the languages that they have to use were made for us, not for them.[00:42:46] Alessio: But the problem is like, as long as you will ever need a human to intervene, you cannot change the language under it. You know what I mean? So I'm curious at what point of automation we'll need to get, we're going to be okay making changes. To the underlying languages, [00:43:00] like the programming languages versus just saying, Hey, you just got to write Python because I understand Python and I'm more important at the end of the day than the model.[00:43:08] Alessio: But I think that will change, but I don't know if it's like two years or five years. I think it's more nuanced actually.[00:43:13] Bret: So I think there's a, some of the more interesting programming languages bring semantics into syntax. So let me, that's a little reductive, but like Rust as an example, Rust is memory safe.[00:43:25] Bret: Statically, and that was a really interesting conceptual, but it's why it's hard to write rust. It's why most people write python instead of rust. I think rust programs are safer and faster than python, probably slower to compile. But like broadly speaking, like given the option, if you didn't have to care about the labor that went into it.[00:43:45] Bret: You should prefer a program written in Rust over a program written in Python, just because it will run more efficiently. It's almost certainly safer, et cetera, et cetera, depending on how you define safe, but most people don't write Rust because it's kind of a pain in the ass. And [00:44:00] the audience of people who can is smaller, but it's sort of better in most, most ways.[00:44:05] Bret: And again, let's say you're making a web service and you didn't have to care about how hard it was to write. If you just got the output of the web service, the rest one would be cheaper to operate. It's certainly cheaper and probably more correct just because there's so much in the static analysis implied by the rest programming language that it probably will have fewer runtime errors and things like that as well.[00:44:25] Bret: So I just give that as an example, because so rust, at least my understanding that came out of the Mozilla team, because. There's lots of security vulnerabilities in the browser and it needs to be really fast. They said, okay, we want to put more of a burden at the authorship time to have fewer issues at runtime.[00:44:43] Bret: And we need the constraint that it has to be done statically because browsers need to be really fast. My sense is if you just think about like the, the needs of a programming language today, where the role of a software engineer is [00:45:00] to use an AI to generate functionality and audit that it does in fact work as intended, maybe functionally, maybe from like a correctness standpoint, some combination thereof, how would you create a programming system that facilitated that?[00:45:15] Bret: And, you know, I bring up Rust is because I think it's a good example of like, I think given a choice of writing in C or Rust, you should choose Rust today. I think most people would say that, even C aficionados, just because. C is largely less safe for very similar, you know, trade offs, you know, for the, the system and now with AI, it's like, okay, well, that just changes the game on writing these things.[00:45:36] Bret: And so like, I just wonder if a combination of programming languages that are more structurally oriented towards the values that we need from an AI generated program, verifiable correctness and all of that. If it's tedious to produce for a person, that maybe doesn't matter. But one thing, like if I asked you, is this rest program memory safe?[00:45:58] Bret: You wouldn't have to read it, you just have [00:46:00] to compile it. So that's interesting. I mean, that's like an, that's one example of a very modest form of formal verification. So I bring that up because I do think you have AI inspect AI, you can have AI reviewed. Do AI code reviews. It would disappoint me if the best we could get was AI reviewing Python and having scaled a few very large.[00:46:21] Bret: Websites that were written on Python. It's just like, you know, expensive and it's like every, trust me, every team who's written a big web service in Python has experimented with like Pi Pi and all these things just to make it slightly more efficient than it naturally is. You don't really have true multi threading anyway.[00:46:36] Bret: It's just like clearly that you do it just because it's convenient to write. And I just feel like we're, I don't want to say it's insane. I just mean. I do think we're at a local maximum. And I would hope that we create a programming system, a combination of programming languages, formal verification, testing, automated code reviews, where you can use AI to generate software in a high scale way and trust it.[00:46:59] Bret: And you're [00:47:00] not limited by your ability to read it necessarily. I don't know exactly what form that would take, but I feel like that would be a pretty cool world to live in.[00:47:08] Alessio: Yeah. We had Chris Lanner on the podcast. He's doing great work with modular. I mean, I love. LVM. Yeah. Basically merging rust in and Python.[00:47:15] Alessio: That's kind of the idea. Should be, but I'm curious is like, for them a big use case was like making it compatible with Python, same APIs so that Python developers could use it. Yeah. And so I, I wonder at what point, well, yeah.[00:47:26] Bret: At least my understanding is they're targeting the data science Yeah. Machine learning crowd, which is all written in Python, so still feels like a local maximum.[00:47:34] Bret: Yeah.[00:47:34] swyx: Yeah, exactly. I'll force you to make a prediction. You know, Python's roughly 30 years old. In 30 years from now, is Rust going to be bigger than Python?[00:47:42] Bret: I don't know this, but just, I don't even know this is a prediction. I just am sort of like saying stuff I hope is true. I would like to see an AI native programming language and programming system, and I use language because I'm not sure language is even the right thing, but I hope in 30 years, there's an AI native way we make [00:48:00] software that is wholly uncorrelated with the current set of programming languages.[00:48:04] Bret: or not uncorrelated, but I think most programming languages today were designed to be efficiently authored by people and some have different trade offs.[00:48:15] Evolution of Programming Languages[00:48:15] Bret: You know, you have Haskell and others that were designed for abstractions for parallelism and things like that. You have programming languages like Python, which are designed to be very easily written, sort of like Perl and Python lineage, which is why data scientists use it.[00:48:31] Bret: It's it can, it has a. Interactive mode, things like that. And I love, I'm a huge Python fan. So despite all my Python trash talk, a huge Python fan wrote at least two of my three companies were exclusively written in Python and then C came out of the birth of Unix and it wasn't the first, but certainly the most prominent first step after assembly language, right?[00:48:54] Bret: Where you had higher level abstractions rather than and going beyond go to, to like abstractions, [00:49:00] like the for loop and the while loop.[00:49:01] The Future of Software Engineering[00:49:01] Bret: So I just think that if the act of writing code is no longer a meaningful human exercise, maybe it will be, I don't know. I'm just saying it sort of feels like maybe it's one of those parts of history that just will sort of like go away, but there's still the role of this offer engineer, like the person actually building the system.[00:49:20] Bret: Right. And. What does a programming system for that form factor look like?[00:49:25] React and Front-End Development[00:49:25] Bret: And I, I just have a, I hope to be just like I mentioned, I remember I was at Facebook in the very early days when, when, what is now react was being created. And I remember when the, it was like released open source I had left by that time and I was just like, this is so f*****g cool.[00:49:42] Bret: Like, you know, to basically model your app independent of the data flowing through it, just made everything easier. And then now. You know, I can create, like there's a lot of the front end software gym play is like a little chaotic for me, to be honest with you. It is like, it's sort of like [00:50:00] abstraction soup right now for me, but like some of those core ideas felt really ergonomic.[00:50:04] Bret: I just wanna, I'm just looking forward to the day when someone comes up with a programming system that feels both really like an aha moment, but completely foreign to me at the same time. Because they created it with sort of like from first principles recognizing that like. Authoring code in an editor is maybe not like the primary like reason why a programming system exists anymore.[00:50:26] Bret: And I think that's like, that would be a very exciting day for me.[00:50:28] The Role of AI in Programming[00:50:28] swyx: Yeah, I would say like the various versions of this discussion have happened at the end of the day, you still need to precisely communicate what you want. As a manager of people, as someone who has done many, many legal contracts, you know how hard that is.[00:50:42] swyx: And then now we have to talk to machines doing that and AIs interpreting what we mean and reading our minds effectively. I don't know how to get across that barrier of translating human intent to instructions. And yes, it can be more declarative, but I don't know if it'll ever Crossover from being [00:51:00] a programming language to something more than that.[00:51:02] Bret: I agree with you. And I actually do think if you look at like a legal contract, you know, the imprecision of the English language, it's like a flaw in the system. How many[00:51:12] swyx: holes there are.[00:51:13] Bret: And I do think that when you're making a mission critical software system, I don't think it should be English language prompts.[00:51:19] Bret: I think that is silly because you want the precision of a a programming language. My point was less about that and more about if the actual act of authoring it, like if you.[00:51:32] Formal Verification in Software[00:51:32] Bret: I'll think of some embedded systems do use formal verification. I know it's very common in like security protocols now so that you can, because the importance of correctness is so great.[00:51:41] Bret: My intellectual exercise is like, why not do that for all software? I mean, probably that's silly just literally to do what we literally do for. These low level security protocols, but the only reason we don't is because it's hard and tedious and hard and tedious are no longer factors. So, like, if I could, I mean, [00:52:00] just think of, like, the silliest app on your phone right now, the idea that that app should be, like, formally verified for its correctness feels laughable right now because, like, God, why would you spend the time on it?[00:52:10] Bret: But if it's zero costs, like, yeah, I guess so. I mean, it never crashed. That's probably good. You know, why not? I just want to, like, set our bars really high. Like. We should make, software has been amazing. Like there's a Mark Andreessen blog post, software is eating the world. And you know, our whole life is, is mediated digitally.[00:52:26] Bret: And that's just increasing with AI. And now we'll have our personal agents talking to the agents on the CRO platform and it's agents all the way down, you know, our core infrastructure is running on these digital systems. We now have like, and we've had a shortage of software developers for my entire life.[00:52:45] Bret: And as a consequence, you know if you look, remember like health care, got healthcare. gov that fiasco security vulnerabilities leading to state actors getting access to critical infrastructure. I'm like. We now have like created this like amazing system that can [00:53:00] like, we can fix this, you know, and I, I just want to, I'm both excited about the productivity gains in the economy, but I just think as software engineers, we should be bolder.[00:53:08] Bret: Like we should have aspirations to fix these systems so that like in general, as you said, as precise as we want to be in the specification of the system. We can make it work correctly now, and I'm being a little bit hand wavy, and I think we need some systems. I think that's where we should set the bar, especially when so much of our life depends on this critical digital infrastructure.[00:53:28] Bret: So I'm I'm just like super optimistic about it. But actually, let's go to w
2025-02-11 Weekly News — Episode 228Watch the video version on YouTube at https://youtube.com/live/-08ciY2kW4c?feature=share Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsBrad Wood - Senior Developer at Ortus SolutionsBig Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes Patreon Support ()We have 61 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsOrtus announce BoxLang to the Java Community and JfokusJfokus has been the birth of BoxLang for the Java community. So Incredibly well received. We even had folks coding on their phones on https://try.boxlang.io for some sweet hoodies. What an amazing event. So much great feedback and amazing response to finally having momentum in the dynamic JVM space. We will definitely be back in 2026 In force. hashtag#boxlang hashtag#jfokus hashtag#dynamicJVM hashtag#communityhttps://www.linkedin.com/posts/lmajano_boxlang-jfokus-dynamicjvm-activity-7292961359632240640-N1nc?Get a Free BoxLang+ License with Your ITB 2025 Ticket!At Ortus Solutions, we are dedicated to delivering the best experience for our Into the Box attendees. This year's event will be an exciting opportunity to explore BoxLang and modern CFML development, and we want to ensure that attending in person is even more rewarding.Exclusive On-Site Attendee Benefit: Free 1-Year BoxLang+ License!As a special incentive, all on-site attendees will receive a free 1-year subscription to BoxLang+.BoxLang+ is a professional subscription that enhances development across multiple runtimes, including CLI, web, CommandBox, and serverless environments.https://www.ortussolutions.com/blog/get-a-free-boxlang-license-with-your-itb-2025-ticket Adobe ColdFusion Summit 2025 Adobe ColdFusion Summit 2025 is here—join us in Las Vegas on Sept 22-23 (optional certification days on Sept 21 or 24). Grab your early bird tickets for just $99 before they're gone. Secure your spot today!Register now: https://bit.ly/414pLF6Team Plans and Exclusive Deals: Into the Box 2025!Thinking about attending Into the Box 2025 but don't want to go alone? Or are you looking to train your team with the latest modern software development tools? We've got you covered. Take advantage of our exclusive team deals and bring your team for an even better experience.Get 50% off your second Into the Box on-site ticket.Buy 2, Get 1 Free – Purchase two on-site tickets, and the third one is on us.https://www.ortussolutions.com/blog/team-plans-and-exclusive-deals-into-the-box-2025 TeraTech release Free Online Course for Modernizing CF AppsA Call to the #ColdFusion Keepers of Middleware-Earth! ⚔️
Send me a Text Message hereFULL SHOW NOTES https://www.microsoftinnovationpodcast.com/652 Join Parvez Ghumra as he explores his journey as a Microsoft MVP from Leicester, UK. His passion for the Power Platform and Dynamics 365 CE development is shaped by strong family values, a love for travel—especially his mini pilgrimage to Mecca—and a spicy hobby of chili growing. Parvez reflects on the evolution of CRM deployment tools, from manual XML to modern no-code solutions like Azure DevOps and GitHub Actions, while acknowledging challenges with tools like Package Deployer. Alongside his insights, Mark also shares his own path to MVP recognition, emphasizing the power of community support in driving personal and professional growth. TAKEAWAYS• The role of family in professional development • Travel experiences influencing personal and professional growth • Transition from bespoke development to Dynamics 365 • Importance of Application Lifecycle Management in software delivery • Shift from SDK to low-code solutions in modern development • The value of community support in achieving the MVP statusThis year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world. DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot.Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff Accelerate your Microsoft career with the 90 Day Mentoring Challenge We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem.Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!Support the showIf you want to get in touch with me, you can message me here on Linkedin.Thanks for listening
Picking up from Part 1, hosts Lois Houston and Nikita Abraham continue their deep dive into MySQL security with MySQL Solution Engineer Ravish Patel. In this episode, they focus on user authentication techniques and tools such as MySQL Enterprise Audit and MySQL Enterprise Firewall. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we began exploring MySQL security, covering regulatory compliance and common security threats. Nikita: This week, we're continuing the conversation by digging deeper into MySQL's user authentication methods and taking a closer look at some powerful security tools in the MySQL Enterprise suite. 00:57 Lois: And we're joined once again by Ravish Patel, a MySQL Solution Engineer here at Oracle. Welcome, Ravish! How does user authentication work in MySQL? Ravish: MySQL authenticates users by storing account details in a system database. These accounts are authenticated with three elements, username and hostname commonly separated with an @ sign along with a password. The account identifier has the username and host. The host identifier specifies where the user connects from. It specifies either a DNS hostname or an IP address. You can use a wild card as part of the hostname or IP address if you want to allow this username to connect from a range of hosts. If the host value is just the percent sign wildcard, then that username can connect from any host. Similarly, if you create the user account with an empty host, then the user can connect from any host. 01:55 Lois: Ravish, can MySQL Enterprise Edition integrate with an organization's existing accounts? Ravish: MySQL Enterprise authentication integrates with existing authentication mechanisms in your infrastructure. This enables centralized account management, policies, and authentication based on group membership and assigned corporate roles, and MySQL supports a wide range of authentication plugins. If your organization uses Linux, you might already be familiar with PAM, also known as Pluggable Authentication Module. This is a standard interface in Linux and can be used to authenticate to MySQL. Kerberos is another widely used standard for granting authorization using a centralized service. The FIDO Alliance, short for Fast Identify Online, promotes an interface for passwordless authentication. This includes methods for authenticating with biometrics RUSB security tokens. And MySQL even supports logging into centralized authentication services that use LDAP, including having a dedicated plugin to connect to Windows domains. 03:05 Nikita: So, once users are authenticated, how does MySQL handle user authorization? Ravish: The MySQL privilege system uses the GRANT keyword. This grants some privilege X on some object Y to some user Z, and optionally gives you permission to grant the same privilege to others. These can be global administrative privileges that enable users to perform tasks at the server level, or they can be database-specific privileges that allow users to modify the structure or data within a database. 03:39 Lois: What about database privileges? Ravish: Database privileges can be fine-grained from the largest to the smallest. At the database level, you can permit users to create, alter, and delete whole databases. The same privileges apply at the table, view, index, and stored procedure levels. And in addition, you can control who can execute stored procedures and whether they do so with their own identity or with the privileges of the procedure's owner. For tables, you can control who can select, insert, update, and delete rows in those tables. You can even specify the column level, who can select, insert, and update data in those columns. Now, any privileged system carries with it the risk that you might forget an important password and lock yourself out. In MySQL, if you forget the password to the root account and don't have any other admin-level accounts, you will not be able to administer the MySQL server. 04:39 Nikita: Is there a way around this? Ravish: There is a way around this as long as you have physical access to the server that runs the MySQL process. If you launch the MySQL process with the --skip grant tables option, then MySQL will not load the privilege tables from the system database when it starts. This is clearly a dangerous thing to do, so MySQL also implicitly disables network access when you use that option to prevent users from connecting over the network. When you use this option, any client connection to MySQL succeeds and has root privileges. This means you should control who has shell access to the server during this time and you should restart the server or enable privileged system with the command flush privileges as soon as you have changed the root password. The privileges we have already discussed are built into MySQL and are always available. MySQL also makes use of dynamic privileges, which are privileges that are enabled at runtime and which can be granted once they are enabled. In addition, plugins and components can define privileges that relate to features of those plugins. For example, the enterprise firewall plugin defines the firewall admin privilege and the audit admin privilege is defined by the enterprise audit plugin. 06:04 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today! 06:28 Nikita: Welcome back! Ravish, I want to move on to MySQL Enterprise security tools. Could you start with MySQL Enterprise Audit? Ravish: MySQL Enterprise Audit is an extension available in Enterprise Edition that makes it easier to comply with regulations that require observability and control over who does what in your database servers. It provides visibility of connections, authentication, and individual operations. This is a necessary part of compliance with various regulations, including GDPR, NIS2, HIPAA, and so on. You can control who has access to the audited events so that the audits themselves are protected. As well as configuring what you audit, you can also configure rotation policies so that unmonitored audit logs don't fill up your storage space. The configuration can be performed while the server is running with minimal effect on production applications. You don't need to restart the server to enable or disable auditing or to change the filtering options. You can output the audit logs in either XML or JSON format, depending on how you want to perform further searching and processing. If you need it, you can compress the logs to save space and you can encrypt the logs to provide address protection of audited identities and data modifications. The extension is available either as a component or if you prefer, as the legacy plugin. 07:53 Lois: But how does it all work? Ravish: Well, first, as a DBA, you'll enable the audit plugin and attach it to your running server. You can then configure filters to audit your connections and queries and record who does what, when they do it, and so on. Then once the system is up and running, it audits whenever a user authenticates, accesses data, or even when they perform schema changes. The logs are recorded in whatever format that you have configured. You can then monitor the audited events at will with MySQL tools such as Workbench or with any software that can view and manipulate XML or JSON files. You can even configure Enterprise Audit to export the logs to an external Audit Vault, enabling collection, and archiving of audit information from all over your enterprise. In general, you won't audit every action on every server. You can configure filters to control what specific information ends up in the logs. 08:50 Nikita: Why is this sort of filtering necessary, Ravish? Ravish: As a DBA, this enables you to create a custom designed audit process to monitor things that you're really interested in. Rules can be general or very fine grained, which enables you to reduce the overall log size, reduces the performance impact on the database server and underlying storage, makes it easier to process the log file once you've gathered data, and filters are configured with the easily used JSON file format. 09:18 Nikita: So what information is audited? Ravish: You can see who did what, when they did it, what commands they use, and whether they succeeded. You can also see where they connected from, which can be useful when identifying man in the middle attacks or stolen credentials. The log also records any available client information, including software versions and information about the operating system and much more. 09:42 Lois: Can you tell us about MySQL Enterprise Firewall, which I understand is a specific tool to learn and protect the SQL statements that MySQL executes? Ravish: MySQL Enterprise Firewall can be enabled on MySQL Enterprise Edition with a plugin. It uses an allow list to set policies for acceptable queries. You can apply this allow list to either specific accounts or groups. Queries are protected in real time. Every query that executes is verified per server and checked to make sure that it conforms to query structures that are defined in the allow list. This makes it very useful to block SQL injection attacks. Only transactions that match well-formed queries in the allow list are permitted. So any attempt to inject other types of SQL statements are blocked. Not only does it block such statements, but it also sends an alert to the MySQL error log in real time. This gives you visibility on any security gaps in your applications. The Enterprise Firewall has a learning mode during which you can train the firewall to identify the correct sort of query. This makes it easy to create the allow list based on a known good workload that you can create during development before your application goes live. 10:59 Lois: Does MySQL Enterprise Firewall operate seamlessly and transparently with applications? Ravish: Your application simply submits queries as normal and the firewall monitors incoming queries with no application changes required. When you use the Enterprise Firewall, you don't need to change your application. It can submit statements as normal to the MySQL server. This adds an extra layer of protection in your applications without requiring any additional application code so that you can protect against malicious SQL injection attacks. This not only applies to your application, but also to any client that configured user runs. 11:37 Nikita: How does this firewall system work? Ravish: When the application submits a SQL statement, the firewall verifies that the statement is in a form that matches the policy defined in the allow list before it passes to the server for execution. It blocks any statement that is in a form that's outside of policy. In many cases, a badly formed query can only be executed if there is some bug in the application's data validation. You can use the firewall's detection and alerting features to let when it blocks such a query, which will help you quickly detect such bugs, even when the firewall continues to block the malicious queries. 12:14 Lois: Can you take us through some of the encryption and masking features available in MySQL Enterprise Edition? Ravish: Transparent data encryption is a great way to protect against physical security disclosure. If someone gains access to the database files on the file system through a vulnerability of the operating system, or even if you've had a laptop stolen, your data will still be protected. This is called Data at Rest Encryption. It protects not only the data rows in tablespaces, but also other locations that store some version of the data, such as undo logs, redo logs, binary logs and relay logs. It is a strong encryption using the AES 256 algorithm. Once we enable transparent data encryption, it is, of course, transparent to the client software, applications, and users. Applications continue to submit SQL statements, and the encryption and decryptions happen in flight. The application code does not need to change. All data types, table structure, and database names remain the same. It's even transparent to the DBAs. The same data types, table structure, and so on is still how the DBA interacts with the system while creating indexes, views, and procedures. In fact, DBAs don't even need to be in possession of any encryption keys to perform their admin tasks. It is entirely transparent. 13:32 Nikita: What kind of management is required for encryption? Ravish: There is, of course, some key management required at the outside. You must keep the keys safe and put policies in place so that you store and rotate keys effectively, and ensure that you can recover those keys in the event of some disaster. This key management integrates with common standards, including KMIP and KMS. 13:53 Lois: Before we close, I want to ask you about the role of data masking in MySQL. Ravish: Data masking is when we replace some part of the private information with a placeholder. You can mask portions of a string based on the string position using the letter X or some other character. You can also create a table that contains a dictionary of suitable replacement words and use that dictionary to mask values in your data. There are specific functions that work with known formats of data, for example, social security numbers as used in the United States, national insurance numbers from the United Kingdom, and Canadian social insurance numbers. You can also mask various account numbers, such as primary account numbers like credit cards or IBAN numbers as used in the European Bank system. There are also functions to generate random values, which can be useful in test databases. This might be a random number within some range, or an email address, or a compliant credit card number, or social security number. You can also create random information using the dictionary table that contains suitable example values. 14:58 Nikita: Thank you, Ravish, for taking us through MySQL security. We really cannot overstate the importance of this, especially in today's data-driven world. Lois: That's right, Niki. Cyber threats are increasingly sophisticated these days. You really have to be on your toes when it comes to security. If you're interested in learning more about this, the MySQL 8.4 Essentials course on mylearn.oracle.com is a great next step. Nikita: We'd also love to hear your thoughts on our podcast so please feel free to share your comments, suggestions, or questions by emailing us at ou-podcast_ww@oracle.com. That's ou-podcast_ww@oracle.com. In our next episode, we'll journey into the world of MySQL backups. Until then, this is Nikita Abraham… Nikita: And Lois Houston, signing off! 15:51 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Welcome back to PING, at the start of 2025. In this episode, Gautam Akiwate, (now with Apple, but at the time of recording with Stanford University) talks about the 2021 Advanced Network Research Prize winning paper, co-authored with Stefan Savage, Geoffrey Voelker and Kimberly Claffy which was titled "Risky BIZness: Risks Derived from Registrar Name Management". The paper explores a situation which emerged inside the supply chain behind DNS name delegation, in the use of an IETF protocol called Extensible Provisioning Protocol or EPP. EPP is implemented in XML over the SOAP mechanism, and is how registry-registrar communications take place, on behalf of a given domain name holder (the delegate) to record which DNS nameservers have the authority to publish the delegated zone. The problem doesn't lie in the DNS itself, but in the operational practices which emerged in some registrars, to remove dangling dependencies in the systems when domain names were de-registered. In effect they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else. The problem is that feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new valid nameservers would then be permitted to serve the domain, and enable attacks on the integrity of the DNS and the web. Gautam and his co-authors explored a very interesting quirk of the back end systems and in the process helped improve the security of the DNS and identified weaknesses in a long-standing "daily dump" process to provide audit and historical data.
An airhacks.fm conversation with Ladislav Thon (@ladicek) about: CDI history and evolution, transition from XML-based configuration to annotation-based dependency injection, introduction of CDI lite in version 4.0, differences between portable extensions and build-compatible extensions, Arc as Quarkus CDI implementation, challenges in implementing CDI at build time, new features in CDI 4.0 and 4.1 including lifecycle events and method invokers, comparison of CDI with other dependency injection frameworks, discussion on decorators, interceptors, and stereotypes in CDI, performance implications of CDI in Quarkus, Convention over Configuration in CDI, upcoming changes in CDI 5, removal of expression language dependency from CDI API, benefits of build-time oriented implementations like Quarkus, challenges in migrating portable extensions to build-compatible extensions, introduction of synthetic beans and observers, addition of priority support for stereotypes, improvements in invocation context API, ability to declare priority on producers in CDI 4.1, integration of CDI with application programming models, Convention over Configuration paired with dependency injection, performance considerations of CDI in Quarkus compared to manual dependency management Ladislav Thon on twitter: @ladicek
Join hosts Lois Houston and Nikita Abraham as they kick off a new season exploring the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it's so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Happy New Year, everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on the basics of MySQL 8.4. If you're a database administrator or want to become one, this is definitely for you. It's also great for developers working with data-driven apps or IT professionals handling MySQL installs, configurations, and support. 01:03 Lois: That's right, Niki. Throughout the season, we'll be delving into MySQL Enterprise Edition and covering a range of topics, including installation, security, backups, and even MySQL HeatWave on Oracle Cloud. Nikita: Today, we're going to discuss the Oracle MySQL ecosystem and its various components. We'll start by covering the fundamentals of MySQL and the different licenses that are available. Then, we'll explore the key tools and features to boost data security and performance. Plus, we'll talk a little bit about MySQL HeatWave, which is the cloud version of MySQL. 01:39 Lois: To take us through all of this, we've got Perside Foster with us today. Perside is a MySQL Principal Solution Engineer at Oracle. Hi Perside! For anyone new to MySQL, can you explain what it is and why it's so widely used? Perside: MySQL is a relational database management system that organizes data into structured tables, rows, and columns for efficient programming and data management. MySQL is transactional by nature. When storing and managing data, actions such as selecting, inserting, updating, or deleting are required. MySQL groups these actions into a transaction. The transaction is saved only if every part completes successfully. 02:29 Lois: Now, how does MySQL work under the hood? Perside: MySQL is a high-performance database that uses its default storage engine, known as InnoDB. InnoDB helps MySQL handle complex operations and large data volumes smoothly. 02:49 Nikita: For the unversed, what are some day-to-day applications of MySQL? How is it used in the real world? Perside: MySQL works well with online transaction processing workloads. It handles transactions quickly and manages large volumes of transaction at once. OLTP, with low latency and high throughput, makes MySQL ideal for high-speed environments like banking or online shopping. MySQL not only stores data but also replicates it from a main server to several replicas. 03:31 Nikita: That's impressive! And what are the benefits of using MySQL? Perside: It improves data availability and load balancing, which is crucial for businesses that need up-to-date information. MySQL replication supports read scale-out by distributing queries across servers, which increases high availability. MySQL is the most popular database on the web. 04:00 Lois: And why is that? What makes it so popular? What sets it apart from the other database management systems? Perside: First, it is a relational database management system that supports SQL. It also works as a document store, enabling the creation of both SQL and NoSQL applications without the need for separate NoSQL databases. Additionally, MySQL offers advanced security features to protect data integrity and privacy. It also uses tablespaces for better disk space management. This gives database administrators total control over their data storage. MySQL is simple, solid in its reliability, and secure by design. It is easy to use and ideal for both beginners and professionals. MySQL is proven at scale by efficiently handling large data volumes and high transaction rates. MySQL is also open source. This means anyone can download and use it for free. Users can modify the MySQL software to meet their needs. However, it is governed by the GNU General Public License, or GPL. GPL outlines specific rules for its use. MySQL offers two major editions. For developers and small teams, the Community Edition is available for free and includes all of the core features needed. For large enterprises, the Commercial Edition provides advanced features, management tools, and dedicated technical support. 05:58 Nikita: Ok. Let's shift focus to licensing. Who is it useful for? Perside: MySQL licensing is essential for independent software vendors. They're called ISVs. And original manufacturers, they're called OEMs. This is because these companies often incorporate MySQL code into their software products or hardware system to boost the functionality and performance of their product. MySQL licensing is equally important for value-added resellers. We call those VARs. And also, it's important for other distributors. These groups bundle MySQL with other commercially licensed software to sell as part of their product offering. The GPL v.2 license might suit Open Source projects that distribute their products under that license. 07:02 Lois: But what if some independent software vendors, original manufacturers, or value-add resellers don't want to create Open Source products. They don't want their source to be publicly available and they want to keep it private? What happens then? Perside: This is why Oracle provides a commercial licensing option. This license allows businesses to use MySQL in their products without having to disclose their source code as required by GPL v2. 07:33 Nikita: I want to bring up the robust support services that are available for MySQL Enterprise. What can we expect in terms of support, Perside? Perside: MySQL Enterprise Support provides direct access to the MySQL Support team. This team consists of experienced MySQL developers, who are experts in databases. They understand the issues and challenges their customers face because they, too, have personally tackled these issues and challenges. This support service operates globally and is available in 29 languages. So no matter where customers are located, Oracle Support provides assistance, most likely in their preferred language. MySQL Enterprise Support offers regular updates and hot fixes to ensure that the MySQL customer systems stays current with the latest improvements and security patches. MySQL Support is available 24 hours a day, 7 days a week. This ensures that whenever there is an issue, Oracle Support can provide the needed help without any delay. There are no restrictions on how many times customers can receive help from the team because MySQL Enterprise Support allows for unlimited incidents. MySQL Enterprise Support goes beyond simply fixing issues. It also offers guidance and advice. Whether customers require assistance with performance tuning or troubleshooting, the team is there to support them every step of the way. 09:27 Lois: Perside, can you walk us through the various tools and advanced features that are available within MySQL? Maybe we could start with MySQL Shell. Perside: MySQL Shell is an integrated client tool used for all MySQL database operations and administrative functions. It's a top choice among MySQL users for its versatility and powerful features. MySQL Shell offers multi-language support for JavaScript, Python, and SQL. These naturally scriptable languages make coding flexible and efficient. They also allow developers to use their preferred programming language for everything, from automating database tasks to writing complex queries. MySQL Shell supports both document and relational models. Whether your project needs the flexibility of NoSQL's document-oriented structures or the structured relationships of traditional SQL tables, MySQL Shell manages these different data types without any problems. Another key feature of MySQL Shell is its full access to both development and administrative APIs. This ability makes it easy to automate complex database operations and do custom development directly from MySQL Shell. MySQL Shell excels at DBA operations. It has extensive tools for database configuration, maintenance, and monitoring. These tools not only improve the efficiency of managing databases, but they also reduce the possibility for human error, making MySQL databases more reliable and easier to manage. 11:37 Nikita: What about the MySQL Server tool? I know that it is the core of the MySQL ecosystem and is available in both the community and commercial editions. But how does it enhance the MySQL experience? Perside: It connects with various devices, applications, and third-party tools to enhance its functionality. The server manages both SQL for structured data and NoSQL for schemaless applications. It has many key components. The parser, which interprets SQL commands. Optimizer, which ensures efficient query execution. And then the queue cache and buffer pools. They reduce disk usage and speed up access. InnoDB, the default storage engine, maintains data integrity and supports robust transaction and recovery mechanism. MySQL is designed for scalability and reliability. With features like replication and clustering, it distributes data, manage more users, and ensure consistent uptime. 13:00 Nikita: What role does MySQL Enterprise Edition play in MySQL server's capabilities? Perside: MySQL Enterprise Edition improves MySQL server by adding a suite of commercial extensions. These exclusive tools and services are designed for enterprise-level deployments and challenging environments. These tools and services include secure online backup. It keeps your data safe with efficient backup solutions. Real-time monitoring provides insight into database performance and health. The seamless integration connects easily with existing infrastructure, improving data flow and operations. Then you have the 24/7 expert support. It offers round the clock assistance to optimize and troubleshoot your databases. 14:04 Lois: That's an extensive list of features. Now, can you explain what MySQL Enterprise plugins are? I know they're specialized extensions that boost the capabilities of MySQL server, tools, and services, but I'd love to know a little more about how they work. Perside: Each plugin serves a specific purpose. Firewall plugin protects against SQL injection by allowing only pre-approved queries. The audit plugin logs database activities, tracking who accesses databases and what they do. Encryption plugin secures data at rest, protecting it from unauthorized access. Then we have the authentication plugin, which integrates with systems like LDAP and Active Directory for control access. Finally, the thread pool plugin optimizes performance in high load situation by effectively controlling how many execution threads are used and how long they run. The plugin and tools are included in the MySQL Enterprise Edition suite. 15:32 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with likeminded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 16:03 Nikita: Welcome back! We've been going through the various MySQL tools, and another important one is MySQL Enterprise Backup, right? Perside: MySQL Enterprise Backup is a powerful tool that offers online, non-blocking backup and recovery. It makes sure databases remain available and performs optimally during the backup process. It also includes advanced features, such as incremental and differential backup. Additionally, MySQL Enterprise Backup supports compression to reduce backups and encryptions to keep data secure. One of the standard capabilities of MySQL Enterprise Backup is its seamless integration with media management software, or MMS. This integration simplifies the process of managing and storing backups, ensuring that data is easily accessible and secure. Then we have the MySQL Workbench Enterprise. It enhances database development and design with robust tools for creating and managing your diagram and ensuring proper documentation. It simplifies data migration with powerful tools that makes it easy to move databases between platforms. For database administration, MySQL Workbench Enterprise offers efficient tools for monitoring, performance tuning, user management, and backup and recovery. MySQL Enterprise Monitor is another tool. It provides real-time MySQL performance and availability monitoring. It helps track database's health and performance. It visually finds and fixes problem queries. This is to make it easy to identify and address performance issues. It offers MySQL best-practice advisors to guide users in maintaining optimal performance and security. Lastly, MySQL Enterprise Monitor is proactive and it provides forecasting. 18:40 Lois: Oh that's really going to help users stay ahead of potential issues. That's fantastic! What about the Oracle Enterprise Manager Plugin for MySQL? Perside: This one offers availability and performance monitoring to make sure MySQL databases are running smoothly and efficiently. It provides configuration monitoring. This is to help keep track of the database settings and configuration. Finally, it collects all available metrics to provide comprehensive insight into the database operation. 19:19 Lois: Are there any tools designed to handle higher loads and improve security? Perside: MySQL Enterprise Thread Pool improves scalability as concurrent connections grows. It makes sure the database can handle increased loads efficiently. MySQL Enterprise Authentication is another tool. This one integrates MySQL with existing security infrastructures. It provides robust security solutions. It supports Linux PAM, LDAP, Windows, Kerberos, and even FIDO for passwordless authentication. 20:02 Nikita: Do any tools offer benefits like customized logging, data protection, database security? Perside: The MySQL Enterprise Audit provides out-of-the-box logging of connections, logins, and queries in XML or JSON format. It also offers simple to fine-grained policies for filtering and log rotation. This is to ensure comprehensive and customizable logging. MySQL Enterprise Firewall detects and blocks out of policy database transactions. This is to protect your data from unauthorized access and activities. We also have MySQL Enterprise Asymmetric Encryption. It uses MySQL encryption libraries for key management signing and verifying data. It ensures data stays secure during handling. MySQL Transparent Data Encryption, another tool, provides data-at-rest encryption within the database. The Master Key is stored outside of the database in a KMIP 1.1-compliant Key Vault. That is to improve database security. Finally, MySQL Enterprise Masking offers masking capabilities, including string masking and dictionary replacement. This ensures sensitive data is protected by obscuring it. It also provides random data generators, such as range-based, payment card, email, and social security number generators. These tools help create realistic but anonymized data for testing and development. 22:12 Lois: Can you tell us about HeatWave, the MySQL cloud service? We're going to have a whole episode dedicated to it soon, but just a quick introduction for now would be great. Perside: MySQL HeatWave offers a fully managed MySQL service. It provides deployment, backup and restore, high availability, resizing, and read replicas, all the features you need for efficient database management. This service is a powerful union of Oracle Infrastructure and MySQL Enterprise Edition 8. It combines robust performance with top-tier infrastructure. With MySQL HeatWave, your systems are always up to date with the latest security fixes, ensuring your data is always protected. Plus, it supports both OLTP and analytics/ML use cases, making it a versatile solution for diverse database needs. 23:22 Nikita: So to wrap up, what are your key takeways when it comes to MySQL? Perside: When you use MySQL, here is the bottom line. MySQL Enterprise Edition delivers unmatched performance at scale. It provides advanced monitoring and tuning capabilities to ensure efficient database operation, even under heavy loads. Plus, it provides insurance and immediate help when needed, allowing you to depend on expert support whenever an issue arises. Regarding total cost of ownership, TCO, this edition significantly reduces the risk of downtime and enhances productivity. This leads to significant cost savings and improved operational efficiency. On the matter of risk, MySQL Enterprise Edition addresses security and regulatory compliance. This is to make sure your data meets all necessary standards. Additionally, it provides direct contact with the MySQL team for expert guidance. In terms of DevOps agility, it supports automated scaling and management, as well as flexible real-time backups, making it ideal for agile development environments. Finally, concerning customer satisfaction, it enhances application performance and uptime, ensuring your customers have a reliable and smooth experience. 25:18 Lois: Thank you so much, Perside. This is really insightful information. To learn more about all the support services that are available, visit support.oracle.com. This is the central hub for all MySQL Enterprise Support resources. Nikita: Yeah, and if you want to know about the key commercial products offered by MySQL, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Join us next week for a discussion on installing MySQL. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 25:53 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Key Points From This Episode:Ram Venkatesh describes his career journey to founding Sema4.ai. The pain points he was trying to ease with Sema4.ai.How our general approach to big data is becoming more streamlined, albeit rather slowly. The ins and outs of Sema4.ai and how it serves its clients. What Ram means by “agent” and “agent agency” when referring to machine learning copilots.The difference between writing a program to execute versus an agent reasoning with it. Understanding the contextual work training method for agents. The relationship between an LLM and an agent and the risks of training LLMs on agent data.Exploring the next generation of LLM training protocols in the hopes of improving efficiency. The requirements of an LLM if you're not training it and unpacking modality improvements. Why agent input and feedback are major disruptions to SaaS and beyond. Our guest shares his hopes for the future of AI. Quotes:“I've spent the last 30 years in data. So, if there's a database out there, whether it's relational or object or XML or JSON, I've done something unspeakable to it at some point.” — @ramvzz [0:01:46]“As people are getting more experienced with how they could apply GenAI to solve their problems, then they're realizing that they do need to organize their data and that data is really important.” — @ramvzz [0:18:58]“Following the technology and where it can go, there's a lot of fun to be had with that.” — @ramvzz [0:23:29]“Now that we can see how software development itself is evolving, I think that 12-year-old me would've built so many more cooler things than I did with all the tech that's out here now.” — @ramvzz [0:29:14]Links Mentioned in Today's Episode:Ram Venkatesh on LinkedInRam Venkatesh on XSema4.aiClouderaHow AI HappensSama
In Episode 20 of the FileMaker DevCast, join our Portage Bay developers as they talk web apps. Charles, Jacob, Joe, John, Mike, & Russell discuss the nuances of deciding what approach to use when integrating FileMaker data with a web front-end. Do you stick with WebDirect, branch out to using FM BetterForms, or commit to full-stack development? WebDirect has an advantage of proven simplicity. But it also comes with limitations, such as a non-standard web experience and licensing costs for public-facing applications. FM Better Forms can be an excellent fit when you want to provide a polished, web-based user experience while still leveraging your FileMaker expertise. FM BetterForms can reduce development complexity by handling the back-end integration, deployment, and maintenance tasks that would normally be required in a full-stack web app approach. It also includes common front-end libraries - such as Tailwind, Bootstrap, and Lodash - that are kept updated for you. (NOTE: FM BetterForms is not currently compatible with FileMaker Cloud, since Cloud does not yet support XML web publishing.) Listen to the episode, subscribe to our channel, and send us your feedback! #Claris #FileMaker #webapps #WebDirect #FMBetterForms #fullstack #Tailwind #Bootstrap #Lodash #devs #DevCast #calbee #shrimpchips #wasabi
I'd write more here, but I've got places to be. Becky, Jeremy, and I are going to engage in some holiday festivities. We have a couple gingerbread houses to make and a tree to trim. And no nog to speak of. Really, that's all you get by way of show notes this time as a result, deal with it. Send your complaints to podcast@searls.co and they will be read on air. Some bullet points below the fold: My 90-minute, outdated guide to setting up a Mac Aaron's puns, ranked Jim Carrey is 62 and can't even retire I bought my 8 year old a switch and didn't realize how much games cost Teen creates memecoin, dumps it, earns $50,000 Startup will brick $800 emotional support robot for kids without refunds Install the Mozi app (manifesto here | app here) Vision Pro getting PSVR2 controllers The 2024 Game Awards news roundup Intergalactic: The Heretic Prophet looks badass, but is it too inclusive for The Gamers? We don't talk about Luigi An invisible desktop app for cheating on technical interviews (HN comments) Sora is out, but it's not good yet Indiana Jones and the Great Circle is out, and it is good yet Emudeck is so great it shouldn't be legal, and some people probably think it isn't Pikmin Stay tuned to my YouTube channel for upcoming LIVE streams Transcript: [00:00:00] Thank you. [00:00:29] Good morning, internet. [00:00:32] I started speaking before I realized, as an asynchronous audio production, it's actually pretty unlikely that it's the morning where you are. [00:00:43] Although, if it is the morning, coincidentally, please feel free to be creeped out, check over your shoulder. [00:00:51] Today was, I woke up with Vim and Vigor this morning, super excited to take on the day, thinking maybe I've got what it takes to record an audio production today. [00:01:07] And then we have an elderly coffee pot. [00:01:11] I don't want to completely put the blame on it because we were using it wrong for several years. [00:01:24] And it's a long story that I will shorten to say, any piece of consumer electronics or appliances in America, the half-life keeps decreasing. [00:01:37] And so when I say elderly coffee pot, I mean that we bought this coffee pot post-COVID. [00:01:42] And it's already feeling like, oh, we should probably get a new coffee pot, huh? [00:01:45] What happens is, from time to time, heat will build up in the grounds dingus. [00:01:55] I'm just realizing now that I'm like, you know, I'm not a coffee engineer. [00:01:58] Some of you are. [00:02:00] But, you know, of course, we all know that the dingus is connected to the water spigot, which is above the craft. [00:02:09] And what happens, as far as I can tell, is once in a while, you get all that hot water and grounds swirling around. [00:02:20] And if it clogs at all, like if it doesn't release just so, the whole little undercarriage, again, this is a technical term, just stay with me. [00:02:30] And we'll pop forward like three millimeters, which is just enough for the water to kind of miss its target on the craft and then spray all who's he what's it's, as well as for the spigot to start just kind of like splurring, you know, this water coffee slurry everywhere. [00:02:49] And so I went after, you know, but then you still get the triumphant ding dong sound that the coffee is ready. [00:02:56] So I walked over to the coffee expecting like, yes, it's the best, best way to start my day or whatever. [00:03:06] Pull out the coffee. [00:03:07] And the pot is too light. [00:03:10] And I had a familiarity of like what that means. [00:03:13] It means like there is water somewhere. [00:03:17] And it's not in this pot. [00:03:19] And so it's just like, you know, this big, big machine we actually have we've put because of our Mr. [00:03:26] Coffee's, you know, elderly onset incontinence. [00:03:33] We have we have put the entire coffee pot on a tray, like a rimmed silicone tray that you would use for like, I guess, a dog feeding bowl, right? [00:03:45] A dog, you know, messily eats food and slaps water around and stuff. [00:03:49] And you don't want it all over your hardwood. [00:03:50] Like you'd put this underneath that and it would catch some of the water. [00:03:53] So we I spent the first 30 minutes of my waking life today getting my hopes up that I was going to have coffee, followed by, you know, painstakingly carrying this entire cradle of of of coffee pot full of hot brown liquid. [00:04:10] That would stay in all of my clothes and, you know, get on the cabinets and stuff with a silicone underbelly thing. [00:04:18] And just kind of like, you know, we've got one of those big we're very fortunate to have one of those big farmers, farmer house, farmhouse. [00:04:25] I never know what to call it. [00:04:27] Steel, basically a double wide sink. [00:04:30] So what's nice about a double wide sink is that if you've got a problem in your kitchen and you're only a few steps away, whether it's the coffee pot part of the kitchen or the fridge or the freezer or the God forbid, the range or the oven, you can just sort of strategically hurl whatever it is you're holding just about into the into the sink. [00:04:51] And then once it hits the sink, it's, you know, the the the potential damage is limited. [00:04:57] So I gently hurled my coffee apparatus. [00:05:02] Is that the plural of apparatus? [00:05:04] One wonders into the into the into the sink and then spent the next 20 minutes, you know, scrubbing them and all to make another pot. [00:05:13] And Becky, of course, walks down the minute that the second pot is about to be finished. [00:05:18] And I'm like, I've already seen some shit and I'm going to go record a podcast now. [00:05:22] And that swallow you just heard was me having a sip of coffee that was not disgusting, but not great. [00:05:31] But I'll take it over where I was an hour ago. [00:05:39] Thank you for for subscribing as a as a true believer in breaking change. [00:05:47] We're coming up on one year now. [00:05:49] It's hard to believe that it's already been a year, not because this has been a lot of work or a big accomplishment, but just because the the the agony of existence seems to accelerate as you get older. [00:06:03] It's one of the few kindnesses in life and so as we whipsaw around the sun yet again, we're about to do that. [00:06:11] This is the 26th edition version 26 of the podcast. [00:06:17] I've got two names here to release titles and I haven't picked one yet. [00:06:22] So as a special. [00:06:24] Nearing the end of the year treat. [00:06:29] I'm going to pitch them both to you now, right? [00:06:31] So so we're in this together. [00:06:33] I like to think this is a highly collaborative one person show. [00:06:37] Version 26 rich nanotexture. [00:06:42] And that's a nod to the MacBook Pro has a nanotexture anti-glare screen coding option. [00:06:52] It's a reference to the rich Corinthian leather that was actually it's a Chrysler reference. [00:06:58] It's a made up thing. [00:06:59] There is no such thing as Corinthian leather, but like that's what they called their their seating. [00:07:03] And Steve Jobs referenced that as being the inspiration for I think it was the iPad calendar app. [00:07:13] With the rich Corinthian leather up at the top during the era of skeuomorphic designs back in 2010, 2009, maybe I can't remember exactly when they I think it's 2010 when he had his famous actually leather chair demonstration of the iPad. [00:07:28] Maybe the reason that that stood out to me was the car reference because it is it is an upsell. [00:07:34] The nanotexture $150 if you want to have a don't call it matte finish. [00:07:41] The other one, so that's option one, rich nanotexture. [00:07:46] And I didn't love it because I couldn't get texture. [00:07:49] I couldn't get the same Corinthian, right? [00:07:53] Like you want that bite, the multisyllabic bite that adds the extra, you know, the gravitas of a luxury good. [00:08:04] Yeah, texture just didn't have it for me. [00:08:06] But then if you change that word, it doesn't make sense. [00:08:08] So I mean, the other option two that came to mind version 26 don't don't by the way, don't think I'm going to edit this in post and fix it. [00:08:19] I will not. [00:08:20] I will ultimately land on one of these and that will be the title that you saw on your podcast player. [00:08:25] Or maybe some third thing will come to mind and then this conversation will be moot. [00:08:29] I do not think of this collaborative exercise. [00:08:32] Just imagine it's a it's a it's a quantum collaboration. [00:08:37] So by observing it, that's you actually took part. [00:08:41] You opened your podcast player and then the yeah, the entangled, you know, bits just they coalesced around one of these two names or some third name. [00:08:58] It's all just statistics version 26 Luigi's Mansion, which is a nod to two things at once. [00:09:05] I'm going to talk a little bit about GameCube, but also I'll probably not escape mentioning Luigi Manjoni Manjoni man. [00:09:15] You know, I haven't been watching the news. [00:09:17] I don't know how to pronounce his name, but it looks enough like mansion that I was like, oh, man. [00:09:21] I bet you there's a Nintendo PR guy whose day just got fucking ruined by the fella who is a overnight folk hero. [00:09:30] More attractive than most assassins, I would say. [00:09:35] Great hair. [00:09:36] Good skin. [00:09:37] Apparently, skincare Reddit is all about this fella who murdered in cold blood the CEO of UnitedHealthcare. [00:09:45] If you haven't caught the news, if you're even less online than I am. [00:09:51] And yeah, so I'm trying to decide. [00:09:53] I think Luigi's Mansion is probably going to win. [00:09:56] It's more timely. [00:09:57] It's the first time the name Luigi has come up in the last year. [00:10:00] And I may have mentioned nanotexture before when discussing Apple's very compromised studio display. [00:10:11] So I'm leaning Luigi's Mansion, but, you know, don't tempt me. [00:10:15] I might switch. [00:10:18] I'm going to just keep drinking coffee because I got to power through this. [00:10:21] Let's talk about some life stuff. [00:10:24] I so when we last talked that way back in the heady days of version 25, I had just gotten off a plane from Japan. [00:10:34] I was still a little bit jet lagged. [00:10:36] I recorded later in the evening. [00:10:38] I was tired. [00:10:39] You know, I was still overcoming. [00:10:41] I listened to the episode, realized I was overcoming a cold. [00:10:44] You know, then Becky shortly thereafter, after recording, she developed a pretty bad cough. [00:10:51] And so we've both been sleeping relatively poorly. [00:10:53] And I can't complain about this cough because her having a cough for four nights is nothing like me snoring on and off for over a year. [00:11:02] And I think the fact that her cough is consistent is actually a kindness compared to the sporadic nature of my snoring, where it's like I might go a week without it. [00:11:11] And then all of a sudden there's like, bam. [00:11:14] So she doesn't, you know, it's like sneaks up on her and that's not fair. [00:11:17] So so she's got a cough and I haven't been sleeping particularly well. [00:11:20] Maybe that's it. [00:11:22] I also, you know, I wanted to dry out because I was living on shoe highs, you know, canned cocktails in Japan for way too long. [00:11:30] Just drinking, you know, five whole dollars of alcohol every day, which is an irresponsible amount of alcohol. [00:11:36] It turns out. [00:11:40] Yeah, that's one nice thing about living in Orlando and theme park Orlando is that the average price of a cocktail here is seriously $20. [00:11:49] I think it is. [00:11:51] I am delighted and surprised when I find a cocktail under $20. [00:11:55] That's any good. [00:11:55] In fact, the four seasons right around the corner, their lobby bar has a some of the best bartenders in the state of Florida. [00:12:05] Like they went all kinds of awards. [00:12:06] And so when you say a lobby bar, you think it sucks. [00:12:09] But it's actually it's like it's a it's a restaurant with a room if you're ever around and they still do a happy hour with like $4. [00:12:18] It was $4 beers. [00:12:19] I think they finally increased to $5 beers draft beer. [00:12:23] And it's all craft. [00:12:25] You know, it's all fancy people stuff. [00:12:27] And they do it's I think it's $10 margaritas, French 75s, and they got some other happy hour cocktail. [00:12:37] It was highballs for a while. [00:12:39] Whiskey highballs was like probably centauri toki or something. [00:12:43] I gotta say like that $10 margarita. [00:12:47] They'll throw some jalapeno in there if you want some tahini rim, you know, they do it up. [00:12:52] They do it well. [00:12:54] But that might be the cheapest cocktail I've had in all of Orlando is at the Four Seasons. [00:13:01] Famous for that TikTok meme of the Four Seasons baby, if you're a TikTok person. [00:13:06] Anyway, all that all all this drinking talk back to the point. [00:13:11] I've been not drinking for a week. [00:13:12] And I, you know, I'm back to tracking my nutrients every day. [00:13:17] The things that I consume and adding up all of the protein and carbohydrate and realizing [00:13:21] if you don't drink, it's actually really easy to blow past one's protein goals. [00:13:25] And so I had one day where I had like 240 grams of protein, which is [00:13:28] enough protein that you'll feel it the next morning if you're not used to it. [00:13:34] And I still was losing weight. [00:13:38] I lost like five or six pounds in the last week. [00:13:43] And to the point where it was like, you know, I was feeling a little lightheaded, [00:13:47] a little bit woozy because I wasn't drinking enough is the takeaway. [00:13:52] So so thank God we got to go to a Christmas party last night. [00:13:57] It was it was great Gatsby themed. [00:13:58] And I dressed up like a man who wanted to do the bare minimum to not get made fun of at the party. [00:14:05] So I had some some suspenders on instead of a belt, which was the first time I ever put on suspenders. [00:14:13] They were not period appropriate suspenders simply because they had the, you know, the [00:14:18] little class B dues instead of how they had some other system for I don't I don't fucking know. [00:14:25] Like I, I had chat GPT basically helped me through this. [00:14:28] And it's like, hey, you want these kinds of suspenders? [00:14:30] I'm like, that sounds like an ordeal. [00:14:31] How about I just get some universal one size fits all fit and clip them in? [00:14:36] I also had a clip on bow tie. [00:14:37] So that worked. [00:14:39] When you think clip on bow tie, I guess I'd never used one before, but like it, I always [00:14:45] assumed it would just be like, you know, like a barrette clip that would go in front of the [00:14:49] front button and look silly for that reason. [00:14:51] And maybe that's how they used to be. [00:14:53] But it seems these days, if you want to spend $3 on a fancy clip on bow tie with a nice texturing, [00:14:58] I'll say, uh, it's just pre it's a pre tied bow with a still wraps around your neck. [00:15:04] It's just, it has a class mechanism, which seems smart to me, right? [00:15:08] I don't know what. [00:15:09] Look, if you're really into men's fashion, uh, there's this weird intersection or this tension [00:15:19] between I'm a manly man who, who ties my own shoes and, you know, kills my own dinner and [00:15:25] stuff. [00:15:25] And I, I, for fuck's sake, tie my own bow tie from scratch every day. [00:15:29] Right? [00:15:29] Like there's a toxically masculine approach to bow ties, but at the same time, it is such [00:15:35] a foofy accoutrement. [00:15:37] It's like an ascot, um, that the idea of like a manly man, like a man trying to demonstrate [00:15:43] his manliness by the fact that he doesn't use a clip on bow tie, uh, came to mind yesterday [00:15:50] when I was, uh, struggling even with the clasping kind. [00:15:54] I was like, man, I wish I could just get this to anyway. [00:15:58] Um, I had a vest at a gray vest. [00:16:03] This is all brand new territory for me. [00:16:05] Uh, yeah, I, I've, I've leaned pretty hard into the t-shirt and shorts and or jeans life [00:16:10] for so long. [00:16:12] Uh, the, the fella in front of us when we, when we were checking in, cause they took little [00:16:16] photos of you, uh, all of the women had the same exact flapper dress from Amazon, you know, [00:16:22] with the, the, the, the hairband thing with the, you know, fake, the polyester peacock tail. [00:16:28] Becky's looked the best. [00:16:29] I'm not gonna, I'm not even lying. [00:16:32] Uh, uh, her dress actually fit. [00:16:35] He had some, uh, very ill fitting flapper costumes that these women couldn't even move in. [00:16:40] Um, it was interesting. [00:16:42] Uh, but the, the fella in front of us at check-in was wearing a, a, a full blown, you know, tuxedo [00:16:48] get up that he brought from home. [00:16:50] And he was talking about, Oh yeah, well he's got two of them and his wife, you know, ribbed [00:16:54] him a little bit that he could only fit in one. [00:16:55] I was like, man, owning a tuxedo, that's nuts. [00:16:58] Like, and then it like turns out he's like got all these suits and these fancy clothes and [00:17:02] he's an older gentleman. [00:17:05] Uh, but my entire career only the first few years did I have to think about what I was [00:17:10] wearing and, and it never really got beyond pleated, you know, khakis and a starched shirt. [00:17:18] And, and I had, I had to wear a suit maybe on two sales calls. [00:17:22] Um, and they were always the sales calls that were just, uh, there were certain sales demos [00:17:30] when I was a, a, a baby consultant, these really complex bids. [00:17:39] I remember we were at cook County once, uh, uh, the, the county that wraps Chicago and it [00:17:44] has a lot of functions and facilities that operate at the county level. [00:17:48] So, but of course we're in Chicago in some, you know, uh, dystopian office building. [00:17:54] That's very Gothic, I should say. [00:17:57] And the, the solution that we were selling was a response to a bid around some kind of [00:18:05] document, electronic document ingestion and, and, and routing solution. [00:18:09] And so what, what that meant was it was like a 12 person team. [00:18:14] It was a big project working on this pitch. [00:18:18] And most of the work and most of the money came from the software side at the end of the [00:18:23] process. [00:18:23] It's like, you're going to get IBM file net and you're going to get all these different, [00:18:26] uh, enterprise tools. [00:18:28] And we're going to integrate, uh, with all your systems and, and build these custom integrations [00:18:32] that you've asked for here and here and here. [00:18:33] But the, the, the hard part is the human logistics of how do you get all of their paper documents [00:18:41] into the system. [00:18:42] Uh, and that was my job was I had to get paper and then scan it, uh, with a production, big [00:18:50] Kodak funkin fucking scanner. [00:18:52] Uh, and then use, what was it? [00:18:54] Kofax capture or something like a, like an OCR tool of the era. [00:18:59] And the thing about it is that scanning is not, was not ever a science and neither is [00:19:07] OCR, the OCR stuff and OCR stands for optical character recognition. [00:19:10] So you'd have a form and you'd write on the form, like, you know, uh, uh, uh, uh, some, [00:19:15] some demo address and name and all this. [00:19:19] I spent. [00:19:22] So like the people doing the software, like they, they could just like click a button and [00:19:26] like, they could even just use fakery, right? [00:19:29] Like, Oh, the API is not really there, but I'll always return this particular, like, let's [00:19:33] call it an XML soap message. [00:19:34] And so the, the software guys clocked in, clocked out, got back to their billable work. [00:19:39] I, because the stakes were so high in this particular, uh, and I'm here right now explaining [00:19:46] all of this nonsense because I had to wear a suit and that was also really bad, but I [00:19:51] was in Chicago late at night with a group of like, at that point it was like 9 PM and it [00:19:54] was just me and two partners. [00:19:56] Cause the partners had a sickness called avoid family, stay at work. [00:20:02] And, uh, I, I was just running over and over and over again where I'd like, you know, [00:20:09] I'd take the paper, I'd put it through the scanner and it would get 90% of the OCR stuff [00:20:13] done, or I'd get it perfect. [00:20:15] And it would scan everything just right, which would result in the downstream, you know, after [00:20:21] the capture, like all of my integrations, like would route it to the right thing. [00:20:24] So that like, it was basically a game of mousetrap or dominoes where like my task was both [00:20:29] the most important to being able to demonstrate, but also the most error prone, but also the [00:20:37] least, uh, financially like, um, valuable to, to our services company. [00:20:42] And so I had no support, uh, on top of that, they, the, our fucking it people pushed out some [00:20:49] kind of, um, you know, involuntary security update security and bunny quotes that, that [00:20:57] slowed my system down dramatically in the course of just like a day. [00:21:01] And I had, I had no way to test for this. [00:21:04] So I remember I was up at like 11 PM at that point, trying to make this work consistently [00:21:10] and realizing that the only way to get it to run it all required me to, um, install a virtual [00:21:16] machine, put windows in the virtual machine, install all this software inside that virtual [00:21:22] machine, and then run it there because only in the black box of an encrypted virtual machine [00:21:27] image or, uh, you know, a virtual machine, like disc image, could I evade all of the accountant [00:21:33] bullshit that was trying to track and encrypt and, and, and muck with files and flight and [00:21:38] so forth. [00:21:39] And so it was only around like probably one 30 or two that I got to bed and our, our demo [00:21:46] was like at seven in the morning and I had to wear a suit. [00:21:47] So if you ever wonder, Hey, why is Justin always just in a, a t-shirt and shorts? [00:21:54] Uh, I would say childhood trauma, fuck suits. [00:21:59] The only, the only time I associate like nice clothes, you know, having a lot of [00:22:03] having to dress up is church shit. [00:22:05] I didn't want to go to. [00:22:06] And usually it's like the worst church shit. [00:22:09] Like there's some cool church shit out there, you know, youth group where everyone's a horny, [00:22:14] right. [00:22:15] And singing pop songs to try to get people in. [00:22:17] That's as church shit goes, that's above average. [00:22:21] But when you're talking about like, Hey, you know, this aunt you've never heard of died and [00:22:27] we got to go all the way to goddamn Dearborn to sit in a Catholic mass, that's going to [00:22:32] be in Latin. [00:22:33] And they're going to, you know, one of those, you know, you should feel bad for him because [00:22:39] he's abused. [00:22:39] But one of the altar boys, he's going to be waving that little like incense thingy, [00:22:43] the jigger back and forth and back and forth like a metronome. [00:22:46] And, uh, you're going to get all this soot in your face, all of that, you know, frankincense [00:22:51] and myrrh and whatever the fuck they burn. [00:22:52] And, uh, yeah, then they're going to play some songs, but they're not going to be songs you [00:22:57] want to hear. [00:22:57] And you're going to be uncomfortable because I bought you this suit at JC Penny when you [00:23:01] were like nine and you're 12, you're 12 now, and you've gained a lot of weight, but [00:23:06] here we are. [00:23:07] And then you got to go and, you know, like, don't worry because after the service, there's [00:23:12] a big meal, but it's mostly just going to be, you know, styrofoam plates and plastic forks [00:23:16] and, uh, cold rubbery chicken. [00:23:19] And then a whole lot of family members who want to pinch your cheeks, uh, had an aunt that [00:23:24] always wanted to, um, put on a bunch of red lipstick and kiss me and leave kiss marks. [00:23:30] And she thought that was adorable and everyone else thought it was funny. [00:23:33] And for whatever reason, I wasn't a fan, uh, that's the kind of, uh, yeah, so anyway, moving [00:23:45] right along the, uh, the, the other than having to dress up, the, the Christmas party was really [00:23:50] nice because it had an all you can drink martini bar. [00:23:52] So that, that helped that took the edge off a little bit since I hadn't been drinking for [00:23:57] the previous week. [00:23:57] Uh, and it was, you know, uh, they, they had a great bartender, the, the, I assume that [00:24:07] that people drank gin martinis back in the day of Gatsby, but it seemed to be a vodka forward [00:24:12] martini bar, which I appreciated. [00:24:15] Uh, as I get older and my taste buds start dying, uh, I found myself going from dry martinis [00:24:23] to martinis with an olive to martinis with two olives to me asking for like a little bit of [00:24:30] olive juice and then drinking the martini and realizing that wasn't quite enough olive juice. [00:24:34] So that's just disgusting, but, um, it's where, uh, it's one of the signs of age, I guess. [00:24:43] Uh, so the martini bar was good. [00:24:46] Uh, they also had an aged old fashion that they'd made, you know, homemade, um, with like nutmeg [00:24:51] and cinnamon in there. [00:24:52] That was impressive. [00:24:53] Uh, so yeah, had a, had a big old Christmas party last night, had a couple of drinks, uh, [00:25:00] and, and, uh, because of the contrast, whenever I go, you know, go a week without any alcohol [00:25:06] and then I have some alcohol and then I wake up the next morning and I'm like, oh yes, I [00:25:11] know what people mean now that alcohol is poison. [00:25:13] And it's a mildly poisonous thing because I feel mildly poisoned. [00:25:19] Um, and, and I just usually feel that most days until I forget about it. [00:25:23] So it's a data point, uh, to think about, uh, uh, I, I, I had a good, good run for, [00:25:30] for a while there, just cause like when you live in a fucking theme park and there's nowadays [00:25:34] alcohol everywhere that I go and every outing, I had a good run for a few months. [00:25:40] Um, not last year, the year before where I just didn't drink at home as a rule to myself. [00:25:46] I was like, you know, I'm not going to pour any liquor for myself at home unless I'm entertaining [00:25:49] guests. [00:25:50] And, uh, even then go easy on it because I I'm, I'm, I'm going to just the background radiation [00:25:56] of existence in when you live in a bunch of resorts. [00:25:59] Uh, I'll, I'll get, I'll get, I'll get plenty of alcohol subcutaneously. [00:26:05] Um, a contact tie. [00:26:07] So maybe I'll, maybe I'll try that again. [00:26:10] I don't know. [00:26:11] It's the stuff you think about in mid December when you're just inundated with specialty food [00:26:17] and drink options, uh, do other life stuff that isn't alcohol or religion or clothing [00:26:27] related. [00:26:28] Oh, uh, uh, I've been on a quest to not necessarily save a bunch of money, not necessarily. [00:26:35] Uh, I was going to say, uh, tighten my belt, but, uh, I don't know what the suspender equivalent [00:26:43] is because I did not wear a belt last night. [00:26:45] I just wore suspenders. [00:26:46] Uh, I've been interested in, in not budgeting either. [00:26:52] Just, I think awareness. [00:26:54] Like I want, I know that a lot of money flies through my pockets every month in the form of, [00:27:01] um, SAS software subscriptions and streaming services. [00:27:05] I mentioned this last, uh, last go round that I was recommending, Hey, let's say, go take a [00:27:11] look at like our unused streaming subscriptions of those. [00:27:14] Uh, yesterday I did cancel max. [00:27:16] Cause I realized that, uh, if I'm not watching a lot of news, I'm not going to watch John Oliver [00:27:20] and, and they frankly, a lot of HBO's prestige shows haven't been besides they cut a Sesame [00:27:28] street and it just so happened that I canceled that day. [00:27:31] So maybe there's a, some data engineer at HBO who's like, Oh man, people are canceling because [00:27:37] we got rid of Sesame street. [00:27:38] Uh, that would be good. [00:27:40] That would be good for America to get that feedback. [00:27:43] Uh, yeah. [00:27:44] I just want awareness of like, where's the money going and in what proportion and does that sound [00:27:50] right to me? [00:27:50] Uh, and I've, there are software tools for this. [00:27:53] Uh, they are all compromised in some way. [00:27:57] For example, we just, uh, we'd used lunch money in the past, which is a cool app. [00:28:02] And it has the kind of, you know, basic integrations you would expect. [00:28:06] I don't know if it uses plaid or whatever behind the covers, but like you, you connect your, your, [00:28:11] your checking accounts, your credit card accounts. [00:28:14] It lists all your transactions is very, um, customizable in terms of rules that you can [00:28:21] set. [00:28:21] It has an API. [00:28:22] Jen is a solo co-founder and she seems really, really competent and lovely and responsive, [00:28:27] which are all great things. [00:28:29] But the UI is a little clunky for me. [00:28:32] I don't like how it handled URLs. [00:28:33] It was like, once you got all the transactions in there and, and set up, it didn't feel informative [00:28:41] because there wasn't like a good reporting or graphs that just kind of at a glance would [00:28:45] tell you, this is where your money's going. [00:28:46] At least for me. [00:28:47] Uh, additionally, like it, it can't do the Apple card. [00:28:51] That's the, that's become the crux for a lot of these services is that, um, Apple card [00:28:55] only added support for reading. [00:28:59] Uh, well now you can read, uh, uh, so I, Apple added away on iOS and specifically iPhone [00:29:07] OS to read, uh, transactions from Apple card, Apple savings and Apple cash. [00:29:14] And this was like nine months ago, if that, but copilot, uh, money is one of two apps maybe [00:29:22] that supports this. [00:29:23] And so if you, if you have, we have, we each have an Apple card and we use it for kind of [00:29:29] our silly stuff whenever we're, you know, using a tap to pay. [00:29:33] So, so if, if you want to track transactions and you don't want to manually export CSVs [00:29:40] from your wife's phone every 30 days, which is the process that I'd fallen into with, with [00:29:44] lunch money, then you, you basically have copilot money. [00:29:50] And then there's another one, maybe Monarch, uh, the copilot money. [00:29:53] People are always talking about this other app called Monarch. [00:29:55] I haven't checked it out. [00:29:55] I don't know if that's why they like it or if it's just the other one that's being developed [00:29:59] right now in this post mint apocalypse, as we all grapple with the fact that mint was [00:30:04] always bad, uh, but people got into it and I don't copilot money is like nice, but like [00:30:11] it, like, for example, like if I'm, uh, if I buy a, uh, if I put $10, the equivalent of [00:30:19] $10, so 1000 yen on my Starbucks card in Japan, which is totally separate because of course it [00:30:25] is there's two Starbucks cards. [00:30:27] There's the one in Japan and then the one in the rest of the world. [00:30:30] So you open the Japanese only app, you put a thousand yen on it. [00:30:33] Uh, you pay for that with Apple pay. [00:30:36] So which goes to my Apple card and copilot money will read that transaction. [00:30:40] But if you read like the text in the merchant description, it's literally like [00:30:44] staba day and it's like all no spaces. [00:30:47] It's just like 40 characters in a row to, and if you really squint, you can kind of see [00:30:52] Starbucks, Japan, um, you know, app store payment, which is, you know, like I want to [00:31:00] change that to Starbucks, Japan, and then set up a rule to just like always change that. [00:31:05] So I don't have to like memorize these random ass merchant names. [00:31:08] Uh, apparently like after, after two hours of setting up copilot money yesterday, I realized [00:31:13] that there's like both no way to set up that kind of rule. [00:31:16] The only rule that it supports is categorization of, of spending fine, but then if you set [00:31:22] up a rule and you don't like it, there's no way to edit the rules cause there's no UI for [00:31:25] rule editing. [00:31:26] And so then, you know, where do you go, but read it and you're like, okay, well there's [00:31:30] a subreddit. [00:31:30] And then like, what's half the post in the subreddit? [00:31:32] It's about, Oh, of course it's a bunch of dads who are like, I can't see my rules and I have [00:31:36] to contact support. [00:31:37] And it's been nine months. [00:31:38] And I was like, Oh God. [00:31:39] So that's, uh, if anyone's got any great budgeting software that supports Apple card, you let me [00:31:46] know. [00:31:47] Uh, and also isn't a part-time job. [00:31:50] I'm not gonna, I'm not gonna spend all day on this. [00:31:52] I'm not, I'm not gonna, I'm gonna check in on this, uh, the four times a year that I, that [00:31:58] I wake up in a cold sweat wondering, Oh my God, how many subscriptions do I have? [00:32:02] Which is, uh, I, I really missed my calling by not being a dad, I guess. [00:32:07] But it did land me on looking at rocket money. [00:32:11] Uh, so, so, so there was an app called true bill that marketed heavily with like a lot of [00:32:19] other DTC apps where the pitch was, we will negotiate your bills for you. [00:32:26] And by bills, I think that one of the reasons why this, this, this business probably struggled [00:32:31] is that there's really only two that they could reasonably negotiate on your behalf. [00:32:37] You know, you, you imagine they've got a call center or they've got people who've, who [00:32:40] are trained, who have scripts that they follow, who, who will doggedly keep calling back until [00:32:44] they get what, you know, the discount, the, just the steps that you would have to go through [00:32:48] if you wanted to call Comcast or Verizon, they, they, they, they can basically could basically [00:32:57] only really negotiate your ISP and your cell phone carrier. [00:33:01] Cause those are the two sort of, you know, that are, that are transactional enough that [00:33:08] are regionalized or nationalized enough that they, that they could train on. [00:33:11] And then of course, like they, they're the ones that like get you in with a teaser rate and [00:33:15] then gradually turn up the heat over the course of a couple of years. [00:33:19] Well, Quicken Loans bought, they rebranded as rocket and then rocket fill in the blank [00:33:26] with other products. [00:33:26] And they bought true bill around the same time. [00:33:29] And I, my understanding from a distance is that true bill, uh, uh, that became rocket money [00:33:36] in order to be an entree into other rocket star services. [00:33:41] So like you, you now, when you install rocket money, it's still got the negotiation thing. [00:33:46] Cause that's what they market it on, but you have to slog through so much like, no, I'm actually [00:33:52] all set with credit and, and, and, and debt repayment services. [00:33:57] And I'm, I'm already all set with financial advisors and retirement goals. [00:34:00] I just get me to the, to the thing where I can pay you 35% of whatever you save me on [00:34:06] my ISP bill. [00:34:07] And so of course, you know, like I, I, I signed up for the first time, went through the app [00:34:12] onboarding. [00:34:13] I was not impressed with the bugginess of the app, but I was able to soldier on through [00:34:19] it. [00:34:19] And where I landed was I was, uh, following its little setup wizard for first. [00:34:27] Spectrum, which is my internet provider. [00:34:28] And I was, I'd initially paid a hundred dollars when I moved here in 2021, uh, a month for, [00:34:36] for one gig down, call it 30 megabits per second up. [00:34:40] And I can't get a, another ISP here. [00:34:43] They had an exclusive agreement. [00:34:44] They're building neighborhoods bullshit. [00:34:47] Uh, and I, I, so I can't get higher upstream and that really gets in my crawl. [00:34:53] Nevertheless, they have increased prices about $15 a year. [00:34:59] Each time I'm here to the point now where I think my monthly, you know, debit is like $150, [00:35:05] $145 and you fill it out and you give them your pin number. [00:35:11] You got this customer pin that like, you know, is secures your account. [00:35:14] I'm like, eh, all right, well, that's four digits, you know? [00:35:17] And besides I'm already on like this one dead simple plan. [00:35:20] It's just their normal plan. [00:35:22] And it's, you know, like I'm paying top dollar for it. [00:35:26] So what's the worst that they could do if they, if somebody else were to call and change [00:35:30] my plan up, you know, like it, it wouldn't cause that much lasting damage. [00:35:34] Cause it's not like I'm on some teaser rate. [00:35:36] It's not like I've got a great deal as it is. [00:35:38] So I let them do it. [00:35:39] And three days later, I had low expectations, right? [00:35:42] Cause you go on Reddit, speaking of Reddit, you go on and you, you search other people's [00:35:46] experiences and people will say, oh yeah, well like the, you know, I, some of them are [00:35:52] pretty hyperbolic. [00:35:53] It's like, you know, like they, they changed my plan to this and now I'm stuck with this, [00:35:57] you know, TV subscription for the next four years. [00:35:59] And then they charged me a thousand dollars in imagined savings that never materialized. [00:36:03] I'm like, shit. [00:36:04] All right. [00:36:04] Well, that's, that's not good. [00:36:06] But I, I gave them a shot. [00:36:08] They came back three days later and they said, congratulations. [00:36:12] We saved you $859. [00:36:14] I was like, what the, excuse me over the next 12 months. [00:36:18] And it turned out that they got me from $142, $145 down to 70 flat. [00:36:25] You multiply that by 12 and then indeed comes out to eight something. [00:36:28] And I was like, damn. [00:36:29] All right. [00:36:30] And so I've been, I've been looking for the other shoe to drop like ever since, like something [00:36:36] is fishy here. [00:36:37] Like I, they didn't sign me up for other services. [00:36:39] I did receive, I'm looking over at it now. [00:36:43] I did receive a relatively large box that has a, you know, one of those wifi modem router [00:36:50] combo units in it. [00:36:51] That was partly like apparently part of the deal. [00:36:54] I don't know if they canceled my service and then in one fell swoop also signed me up for [00:36:58] service. [00:36:58] But now I've got this gigantic fucking wifi thing that wouldn't even fit in my patch box [00:37:02] if I wanted it, which I don't. [00:37:04] So I'm, I'm, I'm currently in this ether of like, well, if my modem that I rent is still [00:37:11] going to work, I rent for $0. [00:37:14] It's one nice thing about spectrum. [00:37:15] If my modem that I rent is still going to work, uh, maybe I can just keep this wifi thing in [00:37:20] the box and not call anyone. [00:37:22] And maybe everything will keep working and I'll pay the $70 a month, or maybe I should send [00:37:27] the other one back, but then that might trigger some other thing. [00:37:30] Right. [00:37:30] I, so look like, do I recommend the service? [00:37:36] I don't really, I don't, we'll see. [00:37:38] Right. [00:37:39] Like call me in a year. [00:37:40] I should set a reminder. [00:37:41] Oh, I'm sure if something bad happens, I'll, I'll be right on the airwaves screaming about [00:37:47] it. [00:37:47] Like I, like I do, but even after this experience, saving me a lot of money, like what I trust [00:37:53] them with my T-Mobile account, right. [00:37:54] Where I have been grandfathered in on what was called the one choice plus plan in 2014 [00:38:01] or whatever. [00:38:02] And it's genuine, honest to God, unlimited data without any real throttling. [00:38:08] As far as I can tell, until you get to some absurdly high number where you can watch your [00:38:12] videos in HD on your, you know, like, like it's, it's, it's a good one. [00:38:16] It's better than their magenta crap. [00:38:18] Um, and a lower price than their magenta max thing. [00:38:21] Well, we got three lines. [00:38:22] You got, you know, the watches and I would love to pay less for that, but I just don't [00:38:27] try like you, you, you fill out the rocket money form, uh, with the, uh, the, the, it wants [00:38:34] your T-Mobile, like login information. [00:38:36] And that's, that was a bridge too far for me. [00:38:40] I got there and I was like, you know, I could just imagine this going poorly. [00:38:44] You know, these plans are so complicated and feels like even when I call T-Mobile and I [00:38:48] ask, Hey, how's the weather? [00:38:49] Like they click a button and it fucks up my shit for two weeks. [00:38:52] So I'm, I'm, I'm good. [00:38:55] I can probably afford a cell phone bill. [00:38:57] Uh, I just, I just would prefer not to have to pay it. [00:39:01] Only one other life item in the last week, I was given a special opportunity. [00:39:11] Um, I've talked about massages a couple of times on this program and the, uh, I mentioned, [00:39:15] uh, the one I went, uh, the one I had most recently in a previous episode, I, I, I was, I was wrapping [00:39:29] up my massage with a human like you do. [00:39:31] And the human said, have you, have you tried our robot massage? [00:39:36] And, uh, I didn't know how to take that. [00:39:38] And I said, I, I've heard of it. [00:39:41] I know Becky tried it. [00:39:43] If you check Becky's, um, Becky Graham, you'll see, uh, there's a video of her, uh, getting [00:39:48] felt up by a robot. [00:39:50] Uh, I forget the name of the company, but it's, it's, uh, it's like a robot that tries to simulate [00:39:59] the experience of a human massaging you. [00:40:02] So it's, uh, you're on a bed, you're face down. [00:40:06] It's, uh, got arms that kind of go back and forth, uh, on a track and they, they push and [00:40:13] whatnot. [00:40:13] And it kind of reminds me of the white birthing robot from star Wars episode three at the end [00:40:21] when, when Luke and Leah are being born, it does everything short of make the cooing [00:40:26] sounds to get the babies to calm down. [00:40:28] You know, like I, you do have a tablet and you can, you can pick out these pre-baked Spotify [00:40:34] playlists while it's pushing on you. [00:40:36] Anyway, all that to say, I signed up, um, mostly cause it was free. [00:40:41] So I had a 30 minute trial and, uh, the fact is trying to imitate humans was really interesting [00:40:49] to me because I had just spent a month in Japan, uh, getting, uh, what'd you call it? [00:40:54] Uh, massage chairs, our hotel chain that we stay at has always has massage chairs and even [00:41:01] bad massage chairs in Japan are pretty intense. [00:41:03] Uh, uh, but, but good ones are just like, you know, you go in there and it's just like, [00:41:09] I'm sure there's been, you've probably seen a horror movie image, right? [00:41:13] Where it's like, you sit in a chair and then like 25 hands grab all the parts of your body [00:41:18] simultaneously and that is meant to be horrific. [00:41:20] But if those hands, if there was some nice music playing and it was illuminated and those [00:41:25] hands were massaging you simultaneously all over your body, maybe it would be pretty, pretty [00:41:29] great. [00:41:29] And so that's what a Japanese massage chair is like. [00:41:33] Cause they, they don't have this arbitrary conceit that a massage must happen in a format [00:41:39] that resembles how it would happen if a single human on a bed surface was rubbing your tiddly [00:41:45] bits, which is what this robot is. [00:41:49] Right. [00:41:49] And so it's trying to think of another analog, right? [00:41:55] Like where we, we kind of retain the artifice of the way that it used to be before we automated [00:42:00] it. [00:42:00] And, and in some, sometimes we do that to keep people being comfortable like that rich [00:42:05] Corinthian leather. [00:42:06] It's like, we wanted to look like a traditional calendar. [00:42:08] So people know what they're looking at instead of just a bunch of boxes. [00:42:11] It's like, Oh yeah, this looks like a placemat style calendar that I would have had on my desk. [00:42:15] And then eventually that ages out. [00:42:16] And the younger people are like, I've never seen a calendar on a desk, even though my dad [00:42:20] grew up with one, you know? [00:42:24] So maybe that's it, right? [00:42:25] Like, like sometimes that's why we would have a robo massage that like, you know, pressures [00:42:31] and needs you, you know, kind of with just the two arms up and down in particular points, [00:42:35] sometimes at the same time, sometimes just one arm, you know, it's, it's, it's less efficient [00:42:41] is my immediate frustration. [00:42:43] Cause it's like, you could have 45 fucking arms going to town all over my body and I'd [00:42:49] get way more work done in 30 minutes. [00:42:52] Right. [00:42:52] Cause I'm just trying to min max my existence, but instead by, by, by, by imitating a human [00:42:59] massage, like nothing is really gained because I can't see it. [00:43:03] I'm facedown. [00:43:04] I'm looking at a silly tablet and watching imagery, imagery of forests and, and, and ocean waves [00:43:10] and whatnot, and I'm kind of getting a, you can look at a weird overhead view of what [00:43:14] your body is looking at, looking like right then, you know, like it scans your body and [00:43:19] then has like a little illustration of like, here's where I'm pushing you. [00:43:21] Here I go. [00:43:22] It's, it seems more to me like they designed this, you look at this unit and it's just like, [00:43:31] this has got to cost at least 15 grand. [00:43:34] This is an expensive, complicated piece of equipment. [00:43:38] It feels like a lack of imagination, uh, to, to somebody had the idea, let's take human [00:43:47] masseuses out of the equation and just make a robo masseuse thing that we could put in spas [00:43:53] when, uh, you'd actually have a better experience. [00:43:56] It would be cheaper. [00:43:57] And there's like more prior art at Panasonic or these other companies in Japan. [00:44:01] If you just made a, you know, massage chair, but that would be boring, I guess. [00:44:08] Uh, and massage chairs, like you, you hear the word massage chair right now as you're listening. [00:44:13] And if you haven't had like a real one, you know, at a Japanese Denki-yasan on the third [00:44:17] floor, where all the salary men on their way home tell their wives, oh, I got a, I got a big meeting [00:44:24] with the boss and then they go to, they go to Yamada Denki or they go to Yodabashi camera. [00:44:28] And then they just, you know, they take their briefcase and they set it down next to one of the [00:44:33] trial units of the massage chair. [00:44:34] And then they, they, they, they, they go into this little like sensory deprivation pod and [00:44:39] they get all their bits smushed simultaneously and they got a remote control and they can [00:44:45] say, just do it hard. [00:44:46] And then they can forget their worries for, for 15 minutes until, uh, one of the staff has [00:44:52] to remind them that, uh, they don't live there and that they have to go home now. [00:44:56] If you haven't had that experience, uh, you probably, when you hear a massage chair, think [00:45:02] of like those $2, you know, leather chairs that are, you know, just like our just normal [00:45:08] fucking chairs that may be vibrate, like the vibrating bed equivalent that you see at an [00:45:12] airport. [00:45:12] Um, this is not what I'm talking about. [00:45:15] So get your head out of there and, and go Google, you know, for high end Japanese massage [00:45:22] chair, and you might get some idea. [00:45:24] Uh, also I, uh, in the course of a 30 minute massage, I encountered so many fucking Android [00:45:32] tablet bugs. [00:45:33] I, I didn't, I gave them a lot of feedback cause they, this is sort of a trial that they're [00:45:37] doing. [00:45:37] They wanted to want to know how, what I thought. [00:45:40] And I gave them a lot of this perspective and feedback about like, well, you know, this [00:45:44] skeuomorphic design, yada, yada. [00:45:45] But I didn't even touch any of the software stuff. [00:45:49] Cause like there's an absolutely nothing that they're going to be able to do with that much [00:45:52] less like they won't even be able to communicate this back to the company in a way that's helpful, [00:45:55] but it was, you know, it would freeze or the display would become non-responsive. [00:46:01] One time I had the music just turn itself all the way up. [00:46:05] The, um, the, so many things about this design are meant to make you feel comfortable are [00:46:13] meant to make you feel safe. [00:46:14] Like if, if you, it moves at all, or if it detects anything is off at all, it basically [00:46:20] like will, will disengage entirely and reposition itself. [00:46:23] And then you have to actively resume the massage. [00:46:26] And then it's got to put the little flappy doos back over you. [00:46:30] Like it's really worried about people flipping out about this robot pressing up against them. [00:46:36] And it extends to, to like, you know, you pick your firmness, like light, medium firm. [00:46:41] And I clicked firm. [00:46:42] And then there, you could see there was like a little like pressure bar on the right. [00:46:47] And that even though I'd clicked the firm preset, I wasn't at a hundred percent pressure. [00:46:52] And I was like, well, that, that won't do. [00:46:54] And so I jacked it up to a hundred percent right out of the gate. [00:46:56] And the whole time, 30 minutes, like you could, uh, [00:46:59] Hmm. [00:47:01] It, I knew that a massage was happening. [00:47:05] Like I knew when contact was being made, but like, it was not a massage. [00:47:08] It was, it was somebody kind of like, like, like back rub would be generous. [00:47:14] It was like somebody like took an open palm hand and just pressed it. [00:47:18] Just, just, just an obnoxiously against different parts of my body and no firmness beyond that. [00:47:26] So you got a robo massage. [00:47:29] It's limited in what it can do. [00:47:33] Cause it's trying to imitate a human. [00:47:34] It's very worried about liability, which is why I imagine the max firmness is light pressure. [00:47:39] Uh, and it's fussy and it's buggy. [00:47:42] And of course it can only do very limited regions of the body. [00:47:45] Like if I was a massage therapist, I'd be like, Hey, sweet. [00:47:49] You know, I'm going to keep having a job longer than all these programmer juckle fucks. [00:47:52] You're going to get replaced by a Claude and open AI. [00:47:56] So I'm, I'm, I'm, I'm confident that a massage therapist is going to be a, a lucrative, you [00:48:03] know, going concern as a career for a little while programming. [00:48:08] I'm not so sure of, but most of us listening have already made our choice, whether we're [00:48:14] going to be massage therapists or programmers. [00:48:16] So we're just going to have to see how this, how this plays out. [00:48:19] All right. [00:48:20] Well, that's all, that's everything going on in my life. [00:48:23] So let's, uh, well, let's follow up on stuff that had been going on in my life and is now [00:48:30] continuing or is once again, I started to realize that there's a, there's a certain theme to this [00:48:37] show. [00:48:37] Hmm. [00:48:38] All right. [00:48:46] There's basically two major areas of follow-up today. [00:48:51] Um, but somehow the two of them take up 11 bullet points in my notes. [00:48:59] So I'll try to be expeditious. [00:49:02] The first is I bought a, uh, M4 pro MacBook pro, I guess an Apple nomenclature, a MacBook pro [00:49:13] left parentheses, 2024, right parentheses with M4 pro. [00:49:19] I think is probably maybe the 2024 is at the end. [00:49:22] Maybe they don't put the date now that they have the chip name. [00:49:25] In any case, I needed a computer that was built for Apple intelligence, which is how they also, [00:49:32] they crammed that in the fucking name. [00:49:34] Um, and like the, every subheader says Apple intelligence on it, which, you know, I mean, [00:49:40] if you're, if you're a marketing dude, it's the thing that, you know, like you gotta, every [00:49:48] year is a struggle to goose people into, to buying computers. [00:49:51] And, uh, it's been a while since they've had anything new to say that your computer can do. [00:49:56] So it makes sense, but come on. [00:49:59] It can't even make Genmoji yet. [00:50:02] Uh, just if you've, if you've downloaded it, used 18.2 iOS or iPadOS, uh, go turn on the, [00:50:13] um, you know, the AI feature, if it's available in your region and language, and then you open [00:50:19] the image playground app and you click through there and let it download all of the image [00:50:24] playground shit, uh, in particular, the image playground itself, where you can take a person [00:50:30] and a place and kind of like, you know, create sort of a, uh, a witch's brew of bad imagery [00:50:35] and then, and then have a keep swiping to the right as, as they just all look bad that I have [00:50:43] no, no need for, but Genmoji, or at least the promise of Genmoji, I like quite a lot. [00:50:49] I enjoy, you know, um, typing in little like name, like, so we were at the parks, uh, with [00:50:57] our friends last week and it was a Jollywood Knights event, which is also Gatsby themed. [00:51:06] There's a reason why ordering 1920s era costumes on Amazon in Orlando was like not an overnight. [00:51:13] It was like a two, three day leg because this, this Jollywood Knights 1920s era themed, uh, [00:51:21] ticketed event at Hollywood studios has been going on. And it was one of those nights. And so some [00:51:26] flapper lady in line, she had a purse that had a phone handle on it. And her husband, who now that [00:51:34] I think back on this was dressed very similarly to how I dressed myself last night. So something tells [00:51:39] me he was sort of a long for the ride in this, she picked up the phone handle off of her purse and [00:51:46] handed it to Becky. And then he, you could sort of see him on the phone being a bad ventriloquist [00:51:53] and talking to her on the phone. So like his cell phone was somehow communicating to the purse phone. [00:51:59] It was very, it reminded me of get smart, you know, like that spy TV show from the sixties that was on [00:52:05] Nick at night in the eighties or nineties when I would have watched it. Uh, of course it didn't [00:52:10] work. And then we were just in line and it was like, sorry, we're in line. It didn't work. And then, [00:52:14] and then of course the way that lines work, right. As you turn left, turn right. And now it's up, [00:52:18] here's the same people again. And so they're like, all right, try again. So she picks up the purse [00:52:23] phone and here's the guy talk. And she's like, yes, this is indeed a telephone. That is a purse. [00:52:28] My reaction, my contribution to this experience was to try to generate a Genmoji for the group [00:52:35] that I was with. That was like purse phone. And, uh, wouldn't you know it, uh, it struggled to like, [00:52:43] I was like purse with a phone handle on top. And it was, it gave me like one with like a, [00:52:49] like a locker combination lock instead of a rotary dial in the middle. It was all, it was not, [00:52:54] not good. And, and I think like a lot of these Genmoji, in addition to being bad and not good, [00:53:01] they are when they, there's, they have to be so detailed because usually it's people mashing up [00:53:07] different concepts. They have to be so detailed that when in line with texts, you have to squint [00:53:12] and you can barely see what they are. And then if they're as a tap back, you have no hope of knowing [00:53:16] what they are. Like if it's of a person, for example, like it's, you're going to get like 80% shirt [00:53:21] and then like 10% head. So you're not going to be able to tell who's what. Uh, so those need work [00:53:27] and no one wants my Genmoji. My, my brother has formally requested. I stopped sending them and, [00:53:32] uh, I will, I will take that request under advisement. Anyway, uh, bought a MacBook pro. Um, [00:53:42] Oh, I've got a, I've got a parenthetical as a C notes. All right, well, here's eight more bullet [00:53:50] points. I'm going to rattle through these. So Becky, actually, it was her idea. She wanted to [00:53:54] get me this. We were in Japan. She's like, Hey, you know, I heard you talking about the nanotexture [00:53:57] display. And like, of course, you know, the, the, the brighter screen and us being in Orlando, [00:54:01] you never use a computer outside or out of the house. So she wanted to buy it. And she said, [00:54:06] it was just really complicated. I didn't want to fuck up. I didn't want to get you the wrong set of [00:54:09] options. I asked Aaron and Aaron didn't know either. He said he hadn't really been on top of it. [00:54:16] Uh, and I was like, honey, that's so I didn't say like, bless your heart. I, it was a such a sweet [00:54:23] gesture. And it is true that I've been curious about it. Um, but I didn't feel like, uh, I had [00:54:30] to get one right this minute. Uh, and, and honestly, the, the, the 14 inch MacBook pro is still too heavy. [00:54:36] I, I, I, I lifted tonal my, my weightlifting robot, uh, reported in my tonal wrapped because [00:54:46] everything has to do a goddamn wrapped dingus to try to share in social media as if like, you know, [00:54:52] one assumes that all these wrapped posts just go to the goddamn bottom of every algorithm because [00:54:57] they're all the same. But in any case, it showed me a little wrapped video and it said, I wait, [00:55:02] I, I lifted one and a half million pounds last year or over the course of 2024. And I was like, [00:55:07] that's a lot of weight that I lifted. I, yesterday I did the equivalent of like, you know, 250, [00:55:12] 275 pound deadlift barbell deadlift. And that was hard, but not too hard. It's the max weight that, [00:55:20] that tonal can do. Um, I, I, I, I like to think I'm pretty strong now. Uh, that four pound fucking [00:55:31] MacBook pro is backbreakingly heavy, no matter where I am, I'll pick it up and like, that is denser than [00:55:40] it looks. It's a, it's like when you pick up a baby, that's like a little bit too dense, you know, [00:55:46] and you're just like, Oh wow. I was expecting this to be more fun. This is just going to give [00:55:51] me pelvic floor problems. If I do this for more than exactly 30 seconds and then hand it back to [00:55:57] its mother who surely has pelvic floor issues. Um, I don't want to be carrying around this MacBook pro. [00:56:05] I don't want to carry it with my arms. I don't want to carry it in a bag. I don't want to carry it [00:56:09] into the car. I don't want to carry it, you know, uh, in a Starbucks. I want to hire a Porter to [00:56:16] bring it around to me, you know, from place to place. Maybe, maybe they could also saddle up and [00:56:23] have a, uh, vision pro. So that's what I really want. Uh, at least until, and unless Apple releases [00:56:30] the 12 inch MacBook pro, uh, that we were promised in our early years. [00:56:34] Anyway, when Becky said that it was hard to configure and figure out what she'd want to order [00:56:43] or what I would want her to order. And as a result would have made a pretty lousy gift because [00:56:49] the likelihood of her getting it right. Where if you look at the number of configurations for these [00:56:53] seeing this thing, like astronomically small, I actually spent, I sat down, I look, I, I said, [00:57:01] I didn't need the thing. And then I come home and then within a day and a half, uh, my MacBook air is [00:57:07] crying because it's out of storage to the point where like I composed an email and I hit send on the email [00:57:12] and then Apple mail reported, yo, we just barfed on all this and just deleted all your shit. Cause we [00:57:17] ran out of disk space, no warning. And in modern day Mac OS, you don't get to know how much disk space [00:57:23] you have because all of it is like optimized storage. So like whether it's your iCloud drive [00:57:29] or it's your Apple photos, once the system is under any sort of, um, storage stress, it'll, [00:57:35] it's supposed to detect that and start deleting shit. Your phone does this too. So sometimes like [00:57:41] you're like, like I was importing a bunch of raw images on the phone and it said, Oh, you're out of [00:57:45] storage. And then I knew, because I know how it works under the hood, even though it exposes zero [00:57:49] controls or visibility as to what is going the fuck on. I knew that when it ran out of storage, [00:57:54] the right solution was sit and wait for 30 seconds while it deletes shit in the background and then [00:57:59] just hit import again. Right. Well, I, that didn't work in this case. Like I actually went and deleted [00:58:05] like a hundred gigabytes of garbage. It's a small SSD. It's a 512 gigabyte MacBook air. I deleted all this [00:58:11] stuff, but, um, from my iCloud drive on another computer, because this one was finder was completely [00:58:17] unresponsive. Uh, and it never got better because it had suspended all iCloud drive syncing as a, [00:58:24] probably like some sort of like memory safeguard or storage safeguard to like make sure I didn't, [00:58:27] it didn't fuck up anything in the cloud. And so like even going, I'm not going to, [00:58:33] most of that storage was in my iCloud drive, which is how it got full while I was overseas. [00:58:38] And when I came back, I, I didn't have like, I could, I could have gone through and like run [00:58:47] RM dash RF from the terminal and deleted stuff from the iCloud drive to like as a, as an emergency break, [00:58:52] like get, get this SSD empty enough that the operating system can run and then figure it out. [00:59:00] But then of course it would have synced all of those deletions up to the cloud and deleted the [00:59:03] same things off of my other computers. So this is a tractable problem. And I, I, I ultimately did solve [00:59:10] it, but I, I realize now why Apple markets so much of its pro devices to photos and video people, [00:59:20] because photos and videos take up a shit ton of space. Uh, they have different performance [00:59:26] characteristics than programming and, and the, their needs in many ways are higher than what you need. [00:59:33] If you're just writing Ruby code, right? Uh, it just so happens that Swift, the programming language [00:59:38] that they wrote is also like, we'll, we'll take advantage of all of these cores during compilation [00:59:42] in a way that like a lot of local development in other languages won't. [00:59:45] But in my last year of doing a lot more video work, doing a lot more audio work, I can definitely [00:59:52] understand now like, Oh yeah, like the, the MacBook air actually is inappropriate for a lot of the [00:59:57] workflows of the things that I do. So that experience, I came to Becky and I was like, look, I know I said [01:00:05] I didn't need this, but I think I might need this. Um, where need is in very, you know, very gentle [01:00:12] text. It's, it's a thin font variant to say, I need this. What I mean to say is like, I, it would save [01:00:19] me a lot of time and stress and headache and, uh, uh, rework to have a better computer, a more [01:00:26] capacious computer. And of course you can't upgrade the storage and your existing max. So here we are. [01:00:32] Um, but anyway, I was in the configurator for the new MacBook pro. And the first decision you got to [01:00:36] make is do I want a regular M4 chip, which I did not, or one of the pro ones, which is a, you know, [01:00:43] 12 or 14 core. I want to say a chip, uh, which is a huge upgrade over the M3 pro the M3 pro had a way [01:00:53] more efficiency cores and the M4 pro has more performance score. So it's like a, it's doing [01:00:57] much better in synthetic benchmarking that that's impressive. It's a big year over year change or the [01:01:02] M4 max, which is, you know, uh, an incremental improvement over the M3 max, but to the extent [01:01:10] that it's better than the pro it's like, you know, got another meat and quote unquote media [01:01:14] e
An airhacks.fm conversation with Phillip Krueger (@phillipkruger) about: early programming experiences with Visual Basic and Java, transition from actuarial science to computer science, first job at a bank working with Java Swing and RMI over CORBA, experience with J2EE and XML technologies, working with XML and XSLT, development of open-source Swing components, work on dotMobi sites for mobile phones in Africa, creation of API extensions for Java EE and MicroProfile, involvement in the MicroProfile GraphQL specification, joining Red Hat and working on quarkus, development of SmallRye GraphQL, improvements to OpenAPI support in Quarkus, work on Quarkus Dev UI, discussion about the evolution of Java application servers and frameworks, comparison of REST and GraphQL, thoughts on Java development culture in South Africa Phillip Krueger on twitter: @phillipkruger
We have announced our first speaker, friend of the show Dylan Patel, and topic slates for Latent Space LIVE! at NeurIPS. Sign up for IRL/Livestream and to debate!We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!The vibe shift we observed in July - in favor of Claude 3.5 Sonnet, first introduced in June — has been remarkably long lived and persistent, surviving multiple subsequent updates of 4o, o1 and Gemini versions, for Anthropic's Claude to end 2024 as the preferred model for AI Engineers and even being the exclusive choice for new code agents like bolt.new (our next guest on the pod!), which unlocked so much performance from Claude Sonnet that it went from $0 to $4m ARR in 4 weeks when it launched last month.Anthropic has now raised an additional $4b from Amazon and made an incredibly well received update of Claude 3.5 Sonnet (and Haiku), making significant improvements in performance over its predecessors:Solving SWE-BenchAs part of the October Sonnet release, Anthropic teased a blink-and-you'll miss it result:The updated Claude 3.5 Sonnet shows wide-ranging improvements on industry benchmarks, with particularly strong gains in agentic coding and tool use tasks. On coding, it improves performance on SWE-bench Verified from 33.4% to 49.0%, scoring higher than all publicly available models—including reasoning models like OpenAI o1-preview and specialized systems designed for agentic coding. It also improves performance on TAU-bench, an agentic tool use task, from 62.6% to 69.2% in the retail domain, and from 36.0% to 46.0% in the more challenging airline domain. The new Claude 3.5 Sonnet offers these advancements at the same price and speed as its predecessor.This was followed up by a blogpost a week later from today's guest, Erik Schluntz, the engineer who implemented and scored this SOTA result using a simple, non-overengineered version of the SWE-Agent framework (you can see the submissions here). We have previously covered the SWE-Bench story extensively:* Speaking with SWEBench/SWEAgent authors at ICLR* Speaking with Cosine Genie, the previous SOTA (43.8%) on SWEBench Verified (with brief update at DevDay 2024)* Speaking with Shunyu Yao on SWEBench and the ReAct paradigm driving SWE-AgentOne of the notable inclusions in this blogpost are the tools that Erik decided to give Claude, e.g. the “Edit Tool”:The tools teased in the SWEBench submission/blogpost were then polished up and released with Computer Use…And you can also see even more computer use tools given in the new Model Context Protocol servers:Claude Computer UseBecause it is one of the best received AI releases of the year, we recommend watching the 2 minute Computer Use intro (and related demos) in its entirety:Eric also worked on Claude's function calling, tool use, and computer use APIs, so we discuss that in the episode.Erik [00:53:39]: With computer use, just give the thing a browser that's logged into what you want to integrate with, and it's going to work immediately. And I see that reduction in friction as being incredibly exciting. Imagine a customer support team where, okay, hey, you got this customer support bot, but you need to go integrate it with all these things. And you don't have any engineers on your customer support team. But if you can just give the thing a browser that's logged into your systems that you need it to have access to, now, suddenly, in one day, you could be up and rolling with a fully integrated customer service bot that could go do all the actions you care about. So I think that's the most exciting thing for me about computer use, is reducing that friction of integrations to almost zero.As you'll see, this is very top of mind for Erik as a former Robotics founder who's company basically used robots to interface with human physical systems like elevators.Full Video episodePlease like and subscribe!Show Notes* Eric Schluntz* “Raising the bar on SWE-Bench Verified”* Cobalt Robotics* SWE-Bench* SWE-Bench Verified* Human Eval & other benchmarks* Anthropic Workbench* Aider* Cursor* Fireworks AI* E2B* Amanda Askell* Toyota Research* Physical Intelligence (Pi)* Chelsea Finn* Josh Albrecht* Eric Jang* 1X* Dust* Cosine Episode* Bolt* Adept Episode* TauBench* LMSys EpisodeTimestamps* [00:00:00] Introductions* [00:03:39] What is SWE-Bench?* [00:12:22] SWE-Bench vs HumanEval vs others* [00:15:21] SWE-Agent architecture and runtime* [00:21:18] Do you need code indexing?* [00:24:50] Giving the agent tools* [00:27:47] Sandboxing for coding agents* [00:29:16] Why not write tests?* [00:30:31] Redesigning engineering tools for LLMs* [00:35:53] Multi-agent systems* [00:37:52] Why XML so good?* [00:42:57] Thoughts on agent frameworks* [00:45:12] How many turns can an agent do?* [00:47:12] Using multiple model types* [00:51:40] Computer use and agent use cases* [00:59:04] State of AI robotics* [01:04:24] Robotics in manufacturing* [01:05:01] Hardware challenges in robotics* [01:09:21] Is self-driving a good business?TranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners. And today we're in the new studio with my usual co-host, Shawn from Smol AI.Swyx [00:00:14]: Hey, and today we're very blessed to have Erik Schluntz from Anthropic with us. Welcome.Erik [00:00:19]: Hi, thanks very much. I'm Erik Schluntz. I'm a member of technical staff at Anthropic, working on tool use, computer use, and Swebench.Swyx [00:00:27]: Yeah. Well, how did you get into just the whole AI journey? I think you spent some time at SpaceX as well? Yeah. And robotics. Yeah. There's a lot of overlap between like the robotics people and the AI people, and maybe like there's some interlap or interest between language models for robots right now. Maybe just a little bit of background on how you got to where you are. Yeah, sure.Erik [00:00:50]: I was at SpaceX a long time ago, but before joining Anthropic, I was the CTO and co-founder of Cobalt Robotics. We built security and inspection robots. These are sort of five foot tall robots that would patrol through an office building or a warehouse looking for anything out of the ordinary. Very friendly, no tasers or anything. We would just sort of call a remote operator if we saw anything. We have about 100 of those out in the world, and had a team of about 100. We actually got acquired about six months ago, but I had left Cobalt about a year ago now, because I was starting to get a lot more excited about AI. I had been writing a lot of my code with things like Copilot, and I was like, wow, this is actually really cool. If you had told me 10 years ago that AI would be writing a lot of my code, I would say, hey, I think that's AGI. And so I kind of realized that we had passed this level, like, wow, this is actually really useful for engineering work. That got me a lot more excited about AI and learning about large language models. So I ended up taking a sabbatical and then doing a lot of reading and research myself and decided, hey, I want to go be at the core of this and joined Anthropic.Alessio [00:01:53]: And why Anthropic? Did you consider other labs? Did you consider maybe some of the robotics companies?Erik [00:02:00]: So I think at the time I was a little burnt out of robotics, and so also for the rest of this, any sort of negative things I say about robotics or hardware is coming from a place of burnout, and I reserve my right to change my opinion in a few years. Yeah, I looked around, but ultimately I knew a lot of people that I really trusted and I thought were incredibly smart at Anthropic, and I think that was the big deciding factor to come there. I was like, hey, this team's amazing. They're not just brilliant, but sort of like the most nice and kind people that I know, and so I just felt like I could be a really good culture fit. And ultimately, I do care a lot about AI safety and making sure that I don't want to build something that's used for bad purposes, and I felt like the best chance of that was joining Anthropic.Alessio [00:02:39]: And from the outside, these labs kind of look like huge organizations that have these obscureSwyx [00:02:44]: ways to organize.Alessio [00:02:45]: How did you get, you joined Anthropic, did you already know you were going to work on of the stuff you publish or you kind of join and then you figure out where you land? I think people are always curious to learn more.Erik [00:02:57]: Yeah, I've been very happy that Anthropic is very bottoms up and sort of very sort of receptive to whatever your interests are. And so I joined sort of being very transparent of like, hey, I'm most excited about code generation and AI that can actually go out and sort of touch the world or sort of help people build things. And, you know, those weren't my initial initial projects. I also came in and said, hey, I want to do the most valuable possible thing for this company and help Anthropic succeed. And, you know, like, let me find the balance of those. So I was working on lots of things at the beginning, you know, function calling, tool use. And then sort of as it became more and more relevant, I was like, oh, hey, like, let's it's time to go work on encoding agents and sort of started looking at SWE-Bench as sort of a really good benchmark for that.Swyx [00:03:39]: So let's get right into SWE-Bench. That's one of the many claims to fame. I feel like there's just been a series of releases related with Cloud 3.5 Sonnet around about two or three months ago, 3.5 Sonnet came out and it was it was a step ahead in terms of a lot of people immediately fell in love with it for coding. And then last month you released a new updated version of Cloud Sonnet. We're not going to talk about the training for that because that's still confidential. But I think Anthropic's done a really good job, like applying the model to different things. So you took the lead on SWE-Bench, but then also we're going to talk a little bit about computer use later on. So maybe just give us a context about why you looked at SWE-Bench Verified and you actually came up with a whole system for building agents that would maximally use the model well. Yeah.Erik [00:04:28]: So I'm on a sub team called Product Research. And basically the idea of product research is to really understand what end customers care about and want in the models and then work to try to make that happen. So we're not focused on sort of these more abstract general benchmarks like math problems or MMLU, but we really care about finding the things that are really valuable and making sure the models are great at those. And so because I've been interested in coding agents, I knew that this would be a really valuable thing. And I knew there were a lot of startups and our customers trying to build coding agents with our models. And so I said, hey, this is going to be a really good benchmark to be able to measure that and do well on it. And I wasn't the first person at Anthropic to find SWE-Bench, and there are lots of people that already knew about it and had done some internal efforts on it. It fell to me to sort of both implement the benchmark, which is very tricky, and then also to sort of make sure we had an agent and basically like a reference agent, maybe I'd call it, that could do very well on it. Ultimately, we want to provide how we implemented that reference agent so that people can build their own agents on top of our system and get sort of the most out of it as possible. So with this blog post we released on SWE-Bench, we released the exact tools and the prompt that we gave the model to be able to do well.Swyx [00:05:46]: For people who don't know, who maybe haven't dived into SWE-Bench, I think the general perception is they're like tasks that a software engineer could do. I feel like that's an inaccurate description because it is basically, one, it's a subset of like 12 repos. It's everything they could find that every issue with like a matching commit that could be tested. So that's not every commit. And then SWE-Bench verified is further manually filtered by OpenAI. Is that an accurate description and anything you'd change about that? Yes.Erik [00:06:14]: SWE-Bench is, it certainly is a subset of all tasks. It's first of all, it's only Python repos, so already fairly limited there. And it's just 12 of these popular open source repos. And yes, it's only ones where there were tests that passed at the beginning and also new tests that were introduced that test the new feature that's added. So it is, I think, a very limited subset of real engineering tasks. But I think it's also very valuable because even though it's a subset, it is true engineering tasks. And I think a lot of other benchmarks are really kind of these much more artificial setups of even if they're related to coding, they're more like coding interview style questions or puzzles that I think are very different from day-to-day what you end up doing. I don't know how frequently you all get to use recursion in your day-to-day job, but whenever I do, it's like a treat. And I think it's almost comical, and a lot of people joke about this in the industry, is how different interview questions are.Swyx [00:07:13]: Dynamic programming. Yeah, exactly.Erik [00:07:15]: Like, you code. From the day-to-day job. But I think one of the most interesting things about SWE-Bench is that all these other benchmarks are usually just isolated puzzles, and you're starting from scratch. Whereas SWE-Bench, you're starting in the context of an entire repository. And so it adds this entirely new dimension to the problem of finding the relevant files. And this is a huge part of real engineering, is it's actually pretty rare that you're starting something totally greenfield. You need to go and figure out where in a codebase you're going to make a change and understand how your work is going to interact with the rest of the systems. And I think SWE-Bench does a really good job of presenting that problem.Alessio [00:07:51]: Why do we still use human eval? It's like 92%, I think. I don't even know if you can actually get to 100% because some of the data is not actuallySwyx [00:07:59]: solvable.Alessio [00:08:00]: Do you see benchmarks like that, they should just get sunsetted? Because when you look at the model releases, it's like, oh, it's like 92% instead of like 89%, 90% on human eval versus, you know, SWE-Bench verified is you have 49%, right? Which is like, before 45% was state of the art, but maybe like six months ago it was like 30%, something like that. So is that a benchmark that you think is going to replace human eval, or do you think they're just going to run in parallel?Erik [00:08:27]: I think there's still need for sort of many different varied evals. Like sometimes you do really care about just sort of greenfield code generation. And so I don't think that everything needs to go to sort of an agentic setup.Swyx [00:08:39]: It would be very expensive to implement.Erik [00:08:41]: The other thing I was going to say is that SWE-Bench is certainly hard to implement and expensive to run because each task, you have to parse, you know, a lot of the repo to understand where to put your code. And a lot of times you take many tries of writing code, running it, editing it. It can use a lot of tokens compared to something like human eval. So I think there's definitely a space for these more traditional coding evals that are sort of easy to implement, quick to run, and do get you some signal. Maybe hopefully there's just sort of harder versions of human eval that get created.Alessio [00:09:14]: How do we get SWE-Bench verified to 92%? Do you think that's something where it's like line of sight to it, or it's like, you know, we need a whole lot of things to go right? Yeah, yeah.Erik [00:09:23]: And actually, maybe I'll start with SWE-Bench versus SWE-Bench verified, which is I think something I missed earlier. So SWE-Bench is, as we described, this big set of tasks that were scraped.Swyx [00:09:33]: Like 12,000 or something?Erik [00:09:34]: Yeah, I think it's 2,000 in the final set. But a lot of those, even though a human did them, they're actually impossible given the information that comes with the task. The most classic example of this is the test looks for a very specific error string. You know, like assert message equals error, something, something, something. And unless you know that's exactly what you're looking for, there's no way the model is going to write that exact same error message, and so the tests are going to fail. So SWE-Bench verified was actually made in partnership with OpenAI, and they hired humans to go review all these tasks and pick out a subset to try to remove any obstacle like this that would make the tasks impossible. So in theory, all of these tasks should be fully doable by the model. And they also had humans grade how difficult they thought the problems would be. Between less than 15 minutes, I think 15 minutes to an hour, an hour to four hours, and greater than four hours. So that's kind of this interesting sort of how big the problem is as well. To get to SWE-Bench verified to 90%, actually, maybe I'll also start off with some of the remaining failures that I see when running our model on SWE-Bench. I'd say the biggest cases are the model sort of operates at the wrong level of abstraction. And what I mean by that is the model puts in maybe a smaller band-aid when really the task is asking for a bigger refactor. And some of those, you know, is the model's fault, but a lot of times if you're just sort of seeing the GitHub issue, it's not exactly clear which way you should do. So even though these tasks are possible, there's still some ambiguity in how the tasks are described. That being said, I think in general, language models frequently will produce a smaller diff when possible, rather than trying to do a big refactor. I think another area, at least the agent we created, didn't have any multimodal abilities, even though our models are very good at vision. So I think that's just a missed opportunity. And if I read through some of the traces, there's some funny things where, especially the tasks on matplotlib, which is a graphing library, the test script will save an image and the model will just say, okay, it looks great, you know, without looking at it. So there's certainly extra juice to squeeze there of just making sure the model really understands all the sides of the input that it's given, including multimodal. But yeah, I think like getting to 92%. So this is something that I have not looked at, but I'm very curious about. I want someone to look at, like, what is the union of all of the different tasks that have been solved by at least one attempt at SWE-Bench Verified. There's a ton of submissions to the benchmark, and so I'd be really curious to see how many of those 500 tasks at least someone has solved. And I think, you know, there's probably a bunch that none of the attempts have ever solved. And I think it'd be interesting to look at those and say, hey, is there some problem with these? Like, are these impossible? Or are they just really hard and only a human could do them?Swyx [00:12:22]: Yeah, like specifically, is there a category of problems that are still unreachable by any LLM agent? Yeah, yeah. And I think there definitely are.Erik [00:12:28]: The question is, are those fairly inaccessible or are they just impossible because of the descriptions? But I think certainly some of the tasks, especially the ones that the human graders reviewed as like taking longer than four hours are extremely difficult. I think we got a few of them right, but not very many at all in the benchmark.Swyx [00:12:49]: And did those take less than four hours?Erik [00:12:51]: They certainly did less than, yeah, than four hours.Swyx [00:12:54]: Is there a correlation of length of time with like human estimated time? You know what I mean? Or do we have sort of more of X paradox type situations where it's something super easy for a model, but hard for a human?Erik [00:13:06]: I actually haven't done the stats on that, but I think that'd be really interesting to see of like how many tokens does it take and how is that correlated with difficulty? What is the likelihood of success with difficulty? I think actually a really interesting thing that I saw, one of my coworkers who was also working on this named Simon, he was focusing just specifically on the very hard problems, the ones that are said to take longer than four hours. And he ended up sort of creating a much more detailed prompt than I used. And he got a higher score on the most difficult subset of problems, but a lower score overall on the whole benchmark. And the prompt that I made, which is sort of much more simple and bare bones, got a higher score on the overall benchmark, but lower score on the really hard problems. And I think some of that is the really detailed prompt made the model sort of overcomplicate a lot of the easy problems, because honestly, a lot of the suite bench problems, they really do just ask for a bandaid where it's like, hey, this crashes if this is none, and really all you need to do is put a check if none. And so sometimes trying to make the model think really deeply, it'll think in circles and overcomplicate something, which certainly human engineers are capable of as well. But I think there's some interesting thing of the best prompt for hard problems might not be the best prompt for easy problems.Alessio [00:14:19]: How do we fix that? Are you supposed to fix it at the model level? How do I know what prompt I'm supposed to use?Swyx [00:14:25]: Yeah.Erik [00:14:26]: And I'll say this was a very small effect size, and so I think this isn't worth obsessing over. I would say that as people are building systems around agents, I think the more you can separate out the different kinds of work the agent needs to do, the better you can tailor a prompt for that task. And I think that also creates a lot of like, for instance, if you were trying to make an agent that could both solve hard programming tasks, and it could just write quick test files for something that someone else had already made, the best way to do those two tasks might be very different prompts. I see a lot of people build systems where they first sort of have a classification, and then route the problem to two different prompts. And that's sort of a very effective thing, because one, it makes the two different prompts much simpler and smaller, and it means you can have someone work on one of the prompts without any risk of affecting the other tasks. So it creates like a nice separation of concerns. Yeah.Alessio [00:15:21]: And the other model behavior thing you mentioned, they prefer to generate like shorter diffs. Why is that? Like, is there a way? I think that's maybe like the lazy model question that people have is like, why are you not just generating the whole code instead of telling me to implement it?Swyx [00:15:36]: Are you saving tokens? Yeah, exactly. It's like conspiracy theory. Yeah. Yeah.Erik [00:15:41]: Yeah. So there's two different things there. One is like the, I'd say maybe like doing the easier solution rather than the hard solution. And I'd say the second one, I think what you're talking about is like the lazy model is like when the model says like dot, dot, dot, code remains the same.Swyx [00:15:52]: Code goes here. Yeah. I'm like, thanks, dude.Erik [00:15:55]: But honestly, like that just comes as like people on the internet will do stuff like that. And like, dude, if you're talking to a friend and you ask them like to give you some example code, they would definitely do that. They're not going to reroll the whole thing. And so I think that's just a matter of like, you know, sometimes you actually do just, just want like the relevant changes. And so I think it's, this is something where a lot of times like, you know, the models aren't good at mind reading of like which one you want. So I think that like the more explicit you can be in prompting to say, Hey, you know, give me the entire thing, no, no elisions versus just give me the relevant changes. And that's something, you know, we want to make the models always better at following those kinds of instructions.Swyx [00:16:32]: I'll drop a couple of references here. We're recording this like a day after Dario, Lex Friedman just dropped his five hour pod with Dario and Amanda and the rest of the crew. And Dario actually made this interesting observation that like, we actually don't want, we complain about models being too chatty in text and then not chatty enough in code. And so like getting that right is kind of a awkward bar because, you know, you, you don't want it to yap in its responses, but then you also want it to be complete in, in code. And then sometimes it's not complete. Sometimes you just want it to diff, which is something that Enthopic has also released with a, you know, like the, the fast edit stuff that you guys did. And then the other thing I wanted to also double back on is the prompting stuff. You said, you said it was a small effect, but it was a noticeable effect in terms of like picking a prompt. I think we'll go into suite agent in a little bit, but I kind of reject the fact that, you know, you need to choose one prompt and like have your whole performance be predicated on that one prompt. I think something that Enthopic has done really well is meta prompting, prompting for a prompt. And so why can't you just develop a meta prompt for, for all the other prompts? And you know, if it's a simple task, make a simple prompt, if it's a hard task, make a hard prompt. Obviously I'm probably hand-waving a little bit, but I will definitely ask people to try the Enthopic Workbench meta prompting system if they haven't tried it yet. I went to the Build Day recently at Enthopic HQ, and it's the closest I've felt to an AGI, like learning how to operate itself that, yeah, it's, it's, it's really magical.Erik [00:17:57]: Yeah, no, Claude is great at writing prompts for Claude.Swyx [00:18:00]: Right, so meta prompting. Yeah, yeah.Erik [00:18:02]: The way I think about this is that humans, even like very smart humans still use sort of checklists and use sort of scaffolding for themselves. Surgeons will still have checklists, even though they're incredible experts. And certainly, you know, a very senior engineer needs less structure than a junior engineer, but there still is some of that structure that you want to keep. And so I always try to anthropomorphize the models and try to think about for a human sort of what is the equivalent. And that's sort of, you know, how I think about these things is how much instruction would you give a human with the same task? And do you, would you need to give them a lot of instruction or a little bit of instruction?Alessio [00:18:36]: Let's talk about the agent architecture maybe. So first, runtime, you let it run until it thinks it's done or it reaches 200k context window.Swyx [00:18:45]: How did you come up? What's up with that?Erik [00:18:47]: Yeah.Swyx [00:18:48]: Yeah.Erik [00:18:49]: I mean, this, so I'd say that a lot of previous agent work built sort of these very hard coded and rigid workflows where the model is sort of pushed through certain flows of steps. And I think to some extent, you know, that's needed with smaller models and models that are less smart. But one of the things that we really wanted to explore was like, let's really give Claude the reins here and not force Claude to do anything, but let Claude decide, you know, how it should approach the problem, what steps it should do. And so really, you know, what we did is like the most extreme version of this is just give it some tools that it can call and it's able to keep calling the tools, keep thinking, and then yeah, keep doing that until it thinks it's done. And that's sort of the most, the most minimal agent framework that we came up with. And I think that works very well. I think especially the new Sonnet 3.5 is very, very good at self-correction, has a lot of like grit. Claude will try things that fail and then try, you know, come back and sort of try different approaches. And I think that's something that you didn't see in a lot of previous models. Some of the existing agent frameworks that I looked at, they had whole systems built to try to detect loops and see, oh, is the model doing the same thing, you know, more than three times, then we have to pull it out. And I think like the smarter the models are, the less you need that kind of extra scaffolding. So yeah, just giving the model tools and letting it keep sample and call tools until it thinks it's done was the most minimal framework that we could think of. And so that's what we did.Alessio [00:20:18]: So you're not pruning like bad paths from the context. If it tries to do something, it fails. You just burn all these tokens.Swyx [00:20:25]: Yes.Erik [00:20:26]: I would say the downside of this is that this is sort of a very token expensive way to doSwyx [00:20:29]: this. But still, it's very common to prune bad paths because models get stuck. Yeah.Erik [00:20:35]: But I'd say that, yeah, 3.5 is not getting stuck as much as previous models. And so, yeah, we wanted to at least just try the most minimal thing. Now, I would say that, you know, this is definitely an area of future research, especially if we talk about these problems that are going to take a human more than four hours. Those might be things where we're going to need to go prune bad paths to let the model be able to accomplish this task within 200k tokens. So certainly I think there's like future research to be done in that area, but it's not necessary to do well on these benchmarks.Swyx [00:21:06]: Another thing I always have questions about on context window things, there's a mini cottage industry of code indexers that have sprung up for large code bases, like the ones in SweetBench. You didn't need them? We didn't.Erik [00:21:18]: And I think I'd say there's like two reasons for this. One is like SweetBench specific and the other is a more general thing. The more general thing is that I think Sonnet is very good at what we call agentic search. And what this basically means is letting the model decide how to search for something. It gets the results and then it can decide, should it keep searching or is it done? Does it have everything it needs? So if you read through a lot of the traces of the SweetBench, the model is calling tools to view directories, list out things, view files. And it will do a few of those until it feels like it's found the file where the bug is. And then it will start working on that file. And I think like, again, this is all, everything we did was about just giving Claude the full reins. So there's no hard-coded system. There's no search system that you're relying on getting the correct files into context. This just totally lets Claude do it.Swyx [00:22:11]: Or embedding things into a vector database. Exactly. Oops. No, no.Erik [00:22:17]: This is very, very token expensive. And so certainly, and it also takes many, many turns. And so certainly if you want to do something in a single turn, you need to do RAG and just push stuff into the first prompt.Alessio [00:22:28]: And just to make it clear, it's using the Bash tool, basically doing LS, looking at files and then doing CAD for the following context. It can do that.Erik [00:22:35]: But it's file editing tool also has a command in it called view that can view a directory. It's very similar to LS, but it just sort of has some nice sort of quality of life improvements. So I think it'll only do an LS sort of two directories deep so that the model doesn't get overwhelmed if it does this on a huge file. I would say actually we did more engineering of the tools than the overall prompt. But the one other thing I want to say about this agentic search is that for SWE-Bench specifically, a lot of the tasks are bug reports, which means they have a stack trace in them. And that means right in that first prompt, it tells you where to go. And so I think this is a very easy case for the model to find the right files versus if you're using this as a general coding assistant where there isn't a stack trace or you're asking it to insert a new feature, I think there it's much harder to know which files to look at. And that might be an area where you would need to do more of this exhaustive search where an agentic search would take way too long.Swyx [00:23:33]: As someone who spent the last few years in the JS world, it'd be interesting to see SWE-Bench JS because these stack traces are useless because of so much virtualization that we do. So they're very, very disconnected with where the code problems are actually appearing.Erik [00:23:50]: That makes me feel better about my limited front-end experience, as I've always struggled with that problem.Swyx [00:23:55]: It's not your fault. We've gotten ourselves into a very, very complicated situation. And I'm not sure it's entirely needed. But if you talk to our friends at Vercel, they will say it is.Erik [00:24:04]: I will say SWE-Bench just released SWE-Bench Multimodal, which I believe is either entirely JavaScript or largely JavaScript. And it's entirely things that have visual components of them.Swyx [00:24:15]: Are you going to tackle that? We will see.Erik [00:24:17]: I think it's on the list and there's interest, but no guarantees yet.Swyx [00:24:20]: Just as a side note, it occurs to me that every model lab, including Enthopic, but the others as well, you should have your own SWE-Bench, whatever your bug tracker tool. This is a general methodology that you can use to track progress, I guess.Erik [00:24:34]: Yeah, sort of running on our own internal code base.Swyx [00:24:36]: Yeah, that's a fun idea.Alessio [00:24:37]: Since you spend so much time on the tool design, so you have this edit tool that can make changes and whatnot. Any learnings from that that you wish the AI IDEs would take in? Is there some special way to look at files, feed them in?Erik [00:24:50]: I would say the core of that tool is string replace. And so we did a few different experiments with different ways to specify how to edit a file. And string replace, basically, the model has to write out the existing version of the string and then a new version, and that just gets swapped in. We found that to be the most reliable way to do these edits. Other things that we tried were having the model directly write a diff, having the model fully regenerate files. That one is actually the most accurate, but it takes so many tokens, and if you're in a very big file, it's cost prohibitive. There's basically a lot of different ways to represent the same task. And they actually have pretty big differences in terms of model accuracy. I think Eider, they have a really good blog where they explore some of these different methods for editing files, and they post results about them, which I think is interesting. But I think this is a really good example of the broader idea that you need to iterate on tools rather than just a prompt. And I think a lot of people, when they make tools for an LLM, they kind of treat it like they're just writing an API for a computer, and it's sort of very minimal. It's sort of just the bare bones of what you'd need, and honestly, it's so hard for the models to use those. Again, I come back to anthropomorphizing these models. Imagine you're a developer, and you just read this for the very first time, and you're trying to use it. You can do so much better than just sort of the bare API spec of what you'd often see. Include examples in the description. Include really detailed explanations of how things work. And I think that, again, also think about what is the easiest way for the model to represent the change that it wants to make. For file editing, as an example, writing a diff is actually... Let's take the most extreme example. You want the model to literally write a patch file. I think patch files have at the very beginning numbers of how many total lines change. That means before the model has actually written the edit, it needs to decide how many numbers or how many lines are going to change.Swyx [00:26:52]: Don't quote me on that.Erik [00:26:54]: I think it's something like that, but I don't know if that's exactly the diff format. But you can certainly have formats that are much easier to express without messing up than others. And I like to think about how much human effort goes into designing human interfaces for things. It's incredible. This is entirely what FrontEnd is about, is creating better interfaces to kind of do the same things. And I think that same amount of attention and effort needs to go into creating agent computer interfaces.Swyx [00:27:19]: It's a topic we've discussed, ACI or whatever that looks like. I would also shout out that I think you released some of these toolings as part of computer use as well. And people really liked it. It's all open source if people want to check it out. I'm curious if there's an environment element that complements the tools. So how do you... Do you have a sandbox? Is it just Docker? Because that can be slow or resource intensive. Do you have anything else that you would recommend?Erik [00:27:47]: I don't think I can talk about sort of public details or about private details about how we implement our sandboxing. But obviously, we need to have sort of safe, secure, and fast sandboxes for training for the models to be able to practice writing code and working in an environment.Swyx [00:28:03]: I'm aware of a few startups working on agent sandboxing. E2B is a close friend of ours that Alessio has led around in, but also I think there's others where they're focusing on snapshotting memory so that it can do time travel for debugging. Computer use where you can control the mouse or keyboard or something like that. Whereas here, I think that the kinds of tools that we offer are very, very limited to coding agent work cases like bash, edit, you know, stuff like that. Yeah.Erik [00:28:30]: I think the computer use demo that we released is an extension of that. It has the same bash and edit tools, but it also has the computer tool that lets it get screenshots and move the mouse and keyboard. Yeah. So I definitely think there's sort of more general tools there. And again, the tools we released as part of SweetBench were, I'd say they're very specific for like editing files and doing bash, but at the same time, that's actually very general if you think about it. Like anything that you would do on a command line or like editing files, you can do with those tools. And so we do want those tools to feel like any sort of computer terminal work could be done with those same tools rather than making tools that were like very specific for SweetBench like run tests as its own tool, for instance. Yeah.Swyx [00:29:15]: You had a question about tests.Alessio [00:29:16]: Yeah, exactly. I saw there's no test writer tool. Is it because it generates the code and then you're running it against SweetBench anyway, so it doesn't really need to write the test or?Swyx [00:29:26]: Yeah.Erik [00:29:27]: So this is one of the interesting things about SweetBench is that the tests that the model's output is graded on are hidden from it. That's basically so that the model can't cheat by looking at the tests and writing the exact solution. And I'd say typically the model, the first thing it does is it usually writes a little script to reproduce the error. And again, most SweetBench tasks are like, hey, here's a bug that I found. I run this and I get this error. So the first thing the model does is try to reproduce that. So it's kind of been rerunning that script as a mini test. But yeah, sometimes the model will like accidentally introduce a bug that breaks some other tests and it doesn't know about that.Alessio [00:30:05]: And should we be redesigning any tools? We kind of talked about this and like having more examples, but I'm thinking even things of like Q as a query parameter in many APIs, it's like easier for the model to like re-query than read the Q. I'm sure it learned the Q by this point, but like, is there anything you've seen like building this where it's like, hey, if I were to redesign some CLI tools, some API tool, I would like change the way structure to make it better for LLMs?Erik [00:30:31]: I don't think I've thought enough about that off the top of my head, but certainly like just making everything more human friendly, like having like more detailed documentation and examples. I think examples are really good in things like descriptions, like so many, like just using the Linux command line, like how many times I do like dash dash help or look at the man page or something. It's like, just give me one example of like how I actually use this. Like I don't want to go read through a hundred flags. Just give me the most common example. But again, so you know, things that would be useful for a human, I think are also very useful for a model.Swyx [00:31:03]: Yeah. I mean, there's one thing that you cannot give to code agents that is useful for human is this access to the internet. I wonder how to design that in, because one of the issues that I also had with just the idea of a suite bench is that you can't do follow up questions. You can't like look around for similar implementations. These are all things that I do when I try to fix code and we don't do that. It's not, it wouldn't be fair, like it'd be too easy to cheat, but then also it's kind of not being fair to these agents because they're not operating in a real world situation. Like if I had a real world agent, of course I'm giving it access to the internet because I'm not trying to pass a benchmark. I don't have a question in there more, more just like, I feel like the most obvious tool access to the internet is not being used.Erik [00:31:47]: I think that that's really important for humans, but honestly the models have so much general knowledge from pre-training that it's, it's like less important for them. I feel like versioning, you know, if you're working on a newer thing that was like, they came after the knowledge cutoff, then yes, I think that's very important. I think actually this, this is like a broader problem that there is a divergence between Sweebench and like what customers will actually care about who are working on a coding agent for real use. And I think one of those there is like internet access and being able to like, how do you pull in outside information? I think another one is like, if you have a real coding agent, you don't want to have it start on a task and like spin its wheels for hours because you gave it a bad prompt. You want it to come back immediately and ask follow up questions and like really make sure it has a very detailed understanding of what to do, then go off for a few hours and do work. So I think that like real tasks are going to be much more interactive with the agent rather than this kind of like one shot system. And right now there's no benchmark that, that measures that. And maybe I think it'd be interesting to have some benchmark that is more interactive. I don't know if you're familiar with TauBench, but it's a, it's a customer service benchmark where there's basically one LLM that's playing the user or the customer that's getting support and another LLM that's playing the support agent and they interact and try to resolve the issue.Swyx [00:33:08]: Yeah. We talked to the LMSIS guys. Awesome. And they also did MTBench for people listening along. So maybe we need MTSWE-Bench. Sure. Yeah.Erik [00:33:16]: So maybe, you know, you could have something where like before the SWE-Bench task starts, you have like a few back and forths with kind of like the, the author who can answer follow up questions about what they want the task to do. And of course you'd need to do that where it doesn't cheat and like just get the exact, the exact thing out of the human or out of the sort of user. But I think that would be a really interesting thing to see. If you look at sort of existing agent work, like a Repl.it's coding agent, I think one of the really great UX things they do is like first having the agent create a plan and then having the human approve that plan or give feedback. I think for agents in general, like having a planning step at the beginning, one, just having that plan will improve performance on the downstream task just because it's kind of like a bigger chain of thought, but also it's just such a better UX. It's way easier for a human to iterate on a plan with a model rather than iterating on the full task that sort of has a much slower time through each loop. If the human has approved this implementation plan, I think it makes the end result a lot more sort of auditable and trustable. So I think there's a lot of things sort of outside of SweetBench that will be very important for real agent usage in the world. Yeah.Swyx [00:34:27]: I will say also, there's a couple of comments on names that you dropped. Copilot also does the plan stage before it writes code. I feel like those approaches have generally been less Twitter successful because it's not prompt to code, it's prompt plan code. You know, so there's a little bit of friction in there, but it's not much. Like it's, it actually, it's, it, you get a lot for what it's worth. I also like the way that Devin does it, where you can sort of edit the plan as it goes along. And then the other thing with Repl.it, we had a, we hosted a sort of dev day pregame with Repl.it and they also commented about multi-agents. So like having two agents kind of bounce off of each other. I think it's a similar approach to what you're talking about with kind of the few shot example, just as in the prompts of clarifying what the agent wants. But typically I think this would be implemented as a tool calling another agent, like a sub-agent I don't know if you explored that, do you like that idea?Erik [00:35:20]: I haven't explored this enough, but I've definitely heard of people having good success with this. Of almost like basically having a few different sort of personas of agents, even if they're all the same LLM. I think this is one thing with multi-agent that a lot of people will kind of get confused by is they think it has to be different models behind each thing. But really it's sort of usually the same, the same model with different prompts. And yet having one, having them have different personas to kind of bring different sort of thoughts and priorities to the table. I've seen that work very well and sort of create a much more thorough and thought outSwyx [00:35:53]: response.Erik [00:35:53]: I think the downside is just that it adds a lot of complexity and it adds a lot of extra tokens. So I think it depends what you care about. If you want a plan that's very thorough and detailed, I think it's great. If you want a really quick, just like write this function, you know, you probably don't want to do that and have like a bunch of different calls before it does this.Alessio [00:36:11]: And just talking about the prompt, why are XML tags so good in Cloud? I think initially people were like, oh, maybe you're just getting lucky with XML. But I saw obviously you use them in your own agent prompts, so they must work. And why is it so model specific to your family?Erik [00:36:26]: Yeah, I think that there's, again, I'm not sure how much I can say, but I think there's historical reasons that internally we've preferred XML. I think also the one broader thing I'll say is that if you look at certain kinds of outputs, there is overhead to outputting in JSON. If you're trying to output code in JSON, there's a lot of extra escaping that needs to be done, and that actually hurts model performance across the board. Versus if you're in just a single XML tag, there's none of that sort of escaping thatSwyx [00:36:58]: needs to happen.Erik [00:36:58]: That being said, I haven't tried having it write HTML and XML, which maybe then you start running into weird escaping things there. I'm not sure. But yeah, I'd say that's some historical reasons, and there's less overhead of escaping.Swyx [00:37:12]: I use XML in other models as well, and it's just a really nice way to make sure that the thing that ends is tied to the thing that starts. That's the only way to do code fences where you're pretty sure example one start, example one end, that is one cohesive unit.Alessio [00:37:30]: Because the braces are nondescriptive. Yeah, exactly.Swyx [00:37:33]: That would be my simple reason. XML is good for everyone, not just Cloud. Cloud was just the first one to popularize it, I think.Erik [00:37:39]: I do definitely prefer to read XML than read JSON.Alessio [00:37:43]: Any other details that are maybe underappreciated? I know, for example, you had the absolute paths versus relative. Any other fun nuggets?Erik [00:37:52]: I think that's a good sort of anecdote to mention about iterating on tools. Like I said, spend time prompt engineering your tools, and don't just write the prompt, but write the tool, and then actually give it to the model and read a bunch of transcripts about how the model tries to use the tool. I think by doing that, you will find areas where the model misunderstands a tool or makes mistakes, and then basically change the tool to make it foolproof. There's this Japanese term, pokayoke, about making tools mistake-proof. You know, the classic idea is you can have a plug that can fit either way, and that's dangerous, or you can make it asymmetric so that it can't fit this way, it has to go like this, and that's a better tool because you can't use it the wrong way. So for this example of absolute paths, one of the things that we saw while testing these tools is, oh, if the model has done CD and moved to a different directory, it would often get confused when trying to use the tool because it's now in a different directory, and so the paths aren't lining up. So we said, oh, well, let's just force the tool to always require an absolute path, and then that's easy for the model to understand. It knows sort of where it is. It knows where the files are. And then once we have it always giving absolute paths, it never messes up even, like, no matter where it is because it just, if you're using an absolute path, it doesn't matter whereSwyx [00:39:13]: you are.Erik [00:39:13]: So iterations like that, you know, let us make the tool foolproof for the model. I'd say there's other categories of things where we see, oh, if the model, you know, opens vim, like, you know, it's never going to return. And so the tool is stuck.Swyx [00:39:28]: Did it get stuck? Yeah. Get out of vim. What?Erik [00:39:31]: Well, because the tool is, like, it just text in, text out. It's not interactive. So it's not like the model doesn't know how to get out of vim. It's that the way that the tool is, like, hooked up to the computer is not interactive. Yes, I mean, there is the meme of no one knows how to get out of vim. You know, basically, we just added instructions in the tool of, like, hey, don't launch commands that don't return.Swyx [00:39:54]: Yeah, like, don't launch vim.Erik [00:39:55]: Don't launch whatever. If you do need to do something, you know, put an ampersand after it to launch it in the background. And so, like, just, you know, putting kind of instructions like that just right in the description for the tool really helps the model. And I think, like, that's an underutilized space of prompt engineering, where, like, people might try to do that in the overall prompt, but just put that in the tool itself so the model knows that it's, like, for this tool, this is what's relevant.Swyx [00:40:20]: You said you worked on the function calling and tool use before you actually started this vBench work, right? Was there any surprises? Because you basically went from creator of that API to user of that API. Any surprises or changes you would make now that you have extensively dog-fooded in a state-of-the-art agent?Erik [00:40:39]: I want us to make, like, maybe, like, a little bit less verbose SDK. I think some way, like, right now, it just takes, I think we sort of force people to do the best practices of writing out sort of these full JSON schemas, but it would be really nice if you could just pass in a Python function as a tool. I think that could be something nice.Swyx [00:40:58]: I think that there's a lot of, like, Python- There's helper libraries. ... structure, you know. I don't know if there's anyone else that is specializing for Anthropic. Maybe Jeremy Howard's and Simon Willis's stuff. They all have Cloud-specific stuff that they are working on. Cloudette. Cloudette, exactly. I also wanted to spend a little bit of time with SuiteAgent. It seems like a very general framework. Like, is there a reason you picked it apart from it's the same authors as vBench, or?Erik [00:41:21]: The main thing we wanted to go with was the same authors as vBench, so it just felt sort of like the safest, most neutral option. And it was, you know, very high quality. It was very easy to modify, to work with. I would say it also actually, their underlying framework is sort of this, it's like, youSwyx [00:41:39]: know, think, act, observe.Erik [00:41:40]: That they kind of go through this loop, which is like a little bit more hard-coded than what we wanted to do, but it's still very close. That's still very general. So it felt like a good match as sort of the starting point for our agent. And we had already sort of worked with and talked with the SWE-Bench people directly, so it felt nice to just have, you know, we already know the authors. This will be easy to work with.Swyx [00:42:00]: I'll share a little bit of like, this all seems disconnected, but once you figure out the people and where they go to school, it all makes sense. So it's all Princeton. Yeah, the SWE-Bench and SuiteAgent.Erik [00:42:11]: It's a group out of Princeton.Swyx [00:42:12]: Yeah, and we had Shun Yu on the pod, and he came up with the React paradigm, and that's think, act, observe. That's all React. So they're all friends. Yep, yeah, exactly.Erik [00:42:22]: And you know, if you actually read our traces of our submission, you can actually see like think, act, observe in our logs. And we just didn't even change the printing code. So it's like doing still function calls under the hood, and the model can do sort of multiple function calls in a row without thinking in between if it wants to. But yeah, so a lot of similarities and a lot of things we inherited from SuiteAgent just as a starting point for the framework.Alessio [00:42:47]: Any thoughts about other agent frameworks? I think there's, you know, the whole gamut from very simple to like very complex.Swyx [00:42:53]: Autogen, CooEI, LandGraph. Yeah, yeah.Erik [00:42:56]: I think I haven't explored a lot of them in detail. I would say with agent frameworks in general, they can certainly save you some like boilerplate. But I think there's actually this like downside of making agents too easy, where you end up very quickly like building a much more complex system than you need. And suddenly, you know, instead of having one prompt, you have five agents that are talking to each other and doing a dialogue. And it's like, because the framework made that 10 lines to do, you end up building something that's way too complex. So I think I would actually caution people to like try to start without these frameworks if you can, because you'll be closer to the raw prompts and be able to sort of directly understand what's going on. I think a lot of times these frameworks also, by trying to make everything feel really magical, you end up sort of really hiding what the actual prompt and output of the model is, and that can make it much harder to debug. So certainly these things have a place, and I think they do really help at getting rid of boilerplate, but they come with this cost of obfuscating what's really happening and making it too easy to very quickly add a lot of complexity. So yeah, I would recommend people to like try it from scratch, and it's like not that bad.Alessio [00:44:08]: Would you rather have like a framework of tools? Do you almost see like, hey, it's maybe easier to get tools that are already well curated, like the ones that you build, if I had an easy way to get the best tool from you, andSwyx [00:44:21]: like you maintain the definition?Alessio [00:44:22]: Or yeah, any thoughts on how you want to formalize tool sharing?Erik [00:44:26]: Yeah, I think that's something that we're certainly interested in exploring, and I think there is space for sort of these general tools that will be very broadly applicable. But at the same time, most people that are building on these, they do have much more specific things that they're trying to do. You know, I think that might be useful for hobbyists and demos, but the ultimate end applications are going to be bespoke. And so we just want to make sure that the model's great at any tool that it uses. But certainly something we're exploring.Alessio [00:44:52]: So everything bespoke, no frameworks, no anything.Swyx [00:44:55]: Just for now, for now.Erik [00:44:56]: Yeah, I would say that like the best thing I've seen is people building up from like, build some good util functions, and then you can use those as building blocks. Yeah, yeah.Alessio [00:45:05]: I have a utils folder, or like all these scripts. My framework is like def, call, and tropic. And then I just put all the defaults.Swyx [00:45:12]: Yeah, exactly. There's a startup hidden in every utils folder, you know? No, totally not. Like, if you use it enough, like it's a startup, you know? At some point. I'm kind of curious, is there a maximum length of turns that it took? Like, what was the longest run? I actually don't.Erik [00:45:27]: I mean, it had basically infinite turns until it ran into a 200k context. I should have looked this up. I don't know. And so for some of those failed cases where it eventually ran out of context, I mean, it was over 100 turns. I'm trying to remember like the longest successful run, but I think it was definitely over 100 turns that some of the times.Swyx [00:45:48]: Which is not that much. It's a coffee break. Yeah.Erik [00:45:52]: But certainly, you know, these things can be a lot of turns. And I think that's because some of these things are really hard, where it's going to take, you know, many tries to do it. And if you think about like, think about a task that takes a human four hours to do. Think about how many different files you read, and like times you edit a file in four hours. That's a lot more than 100.Alessio [00:46:10]: How many times you open Twitter because you get distracted. But if you had a lot more compute, what's kind of like the return on the extra compute now? So like, you know, if you had thousands of turns or like whatever, like how much better would it get?Erik [00:46:23]: Yeah, this I don't know. And I think this is, I think sort of one of the open areas of research in general with agents is memory and sort of how do you have something that can do work beyond its context length where you're just purely appending. So you mentioned earlier things like pruning bad paths. I think there's a lot of interesting work around there. Can you just roll back but summarize, hey, don't go down this path? There be dragons. Yeah, I think that's very interesting that you could have something that that uses way more tokens without ever using at a time more than 200k. So I think that's very interesting. I think the biggest thing is like, can you make the model sort of losslessly summarize what it's learned from trying different approaches and bring things back? I think that's sort of the big challenge.Swyx [00:47:11]: What about different models?Alessio [00:47:12]: So you have Haiku, which is like, you know, cheaper. So you're like, well, what if I have a Haiku to do a lot of these smaller things and then put it back up?Erik [00:47:20]: I think Cursor might have said that they actually have a separate model for file editing.Swyx [00:47:25]: I'm trying to remember.Erik [00:47:25]: I think they were on maybe the Lex Fridman podcast where they said they have a bigger model, like write what the code should be and then a different model, like apply it. So I think there's a lot of interesting room for stuff like that. Yeah, fast supply.Swyx [00:47:37]: We actually did a pod with Fireworks that they worked with on. It's speculative decoding.Erik [00:47:41]: But I think there's also really interesting things about like, you know, paring down input tokens as well, especially sometimes the models trying to read like a 10,000 line file. That's a lot of tokens. And most of it is actually not going to be relevant. I think it'd be really interesting to like delegate that to Haiku. Haiku read this file and just pull out the most relevant functions. And then, you know, Sonnet reads just those and you save 90% on tokens. I think there's a lot of really interesting room for things like that. And again, we were just trying to do sort of the simplest, most minimal thing and show that it works. I'm really hoping that people, sort of the agent community builds things like that on top of our models. That's, again, why we released these tools. We're not going to go and do lots more submissions to SWE-Bench and try to prompt engineer this and build a bigger system. We want people to like the ecosystem to do that on top of our models. But yeah, so I think that's a really interesting one.Swyx [00:48:32]: It turns out, I think you did do 3.5 Haiku with your tools and it scored a 40.6. Yes.Erik [00:48:38]: So it did very well. It itself is actually very smart, which is great. But we haven't done any experiments with this combination of the two models. But yeah, I think that's one of the exciting things is that how well Haiku 3.5 did on SWE-Bench shows that sort of even our smallest, fastest model is very good at sort of thinking agentically and working on hard problems. Like it's not just sort of for writing simple text anymore.Alessio [00:49:02]: And I know you're not going to talk about it, but like Sonnet is not even supposed to be the best model, you know? Like Opus, it's kind of like we left it at three back in the corner intro. At some point, I'm sure the new Opus will come out. And if you had Opus Plus on it, that sounds very, very good.Swyx [00:49:19]: There's a run with SuiteAgent plus Opus, but that's the official SWE-Bench guys doing it.Erik [00:49:24]: That was the older, you know, 3.0.Swyx [00:49:25]: You didn't do yours. Yeah. Okay. Did you want to? I mean, you could just change the model name.Erik [00:49:31]: I think we didn't submit it, but I think we included it in our model card.Swyx [00:49:35]: Okay.Erik [00:49:35]: We included the score as a comparison. Yeah.Swyx [00:49:38]: Yeah.Erik [00:49:38]: And Sonnet and Haiku, actually, I think the new ones, they both outperformed the original Opus. Yeah. I did see that.Swyx [00:49:44]: Yeah. It's a little bit hard to find. Yeah.Erik [00:49:47]: It's not an exciting score, so we didn't feel like they need to submit it to the benchmark.Swyx [00:49:52]: We can cut over to computer use if we're okay with moving on to topics on this, if anything else. I think we're good.Erik [00:49:58]: I'm trying to think if there's anything else SWE-Bench related.Swyx [00:50:02]: It doesn't have to be also just specifically SWE-Bench, but just your thoughts on building agents, because you are one of the few people that have reached this leaderboard on building a coding agent. This is the state of the art. It's surprisingly not that hard to reach with some good principles. Right. There's obviously a ton of low-hanging fruit that we covered. Your thoughts on if you were to build a coding agent startup, what next?Erik [00:50:24]: I think the really interesting question for me, for all the startups out there, is this kind of divergence between the benchmarks and what real customers will want. So I'm curious, maybe the next time you have a coding agent startup on the podcast, you should ask them that. What are the differences that they're starting to make? Tomorrow.Swyx [00:50:40]: Oh, perfect, perfect. Yeah.Erik [00:50:41]: I'm actually very curious what they will see, because I also have seen, I feel like it's slowed down a little bit if I don't see the startups submitting to SWE-Bench that much anymore.Swyx [00:50:52]: Because of the traces, the trace. So we had Cosign on, they had a 50-something on full, on SWE-Bench full, which is the hardest one, and they were rejected because they didn't want to submit their traces. Yep. IP, you know? Yeah, that makes sense, that makes sense. Actually, tomorrow we're talking to Bolt, which is a cloud customer. You guys actually published a case study with them. I assume you weren't involved with that, but they were very happy with Cloud. Cool. One of the biggest launches of the year. Yeah, totally. We actually happened to b
Show DescriptionHow important is the DX of software vs how important is the person showing off the software, Douglas Crockford and JSON, remembering XML, trying to write better HTML for email, new TC39 proposal, workshopping t-shirts, and what do you do if you want a little bit of database on your website? Listen on Website →Links Web Unleashed 2024 - FITC New High Contrast Syntax Highlighting Themes – CodePen Douglas Crockford JSON JSON Feed Slow Horses JavaScript Compiler Proposal ECMAScript 2024 Updates Contentful Strapi Sanity Content System Heroku Cloudflare Turso Netlify Blobs bolt.new SponsorsBluehostFind unique domains, web hosting, and WordPress tools, all in one place. Empower your business or digital agency with Bluehost.
Bienvenue dans le deux-cent-quatre-vingt-troisième épisode de CacaoCast! Dans cet épisode, Philippe Casgrain et Philippe Guitard discutent des sujets suivants: NotebookML - Un podcast à partir de texte? Hummingbird 2.0 - Maintenant avec un vrai site web SimpleCalendar - Pour présenter vos calendriers OCXML - Génerez du XML en quelques lignes de Swift Mocking URLSession - Pour vos tests unitaires Goshdarnappleversionnumbers - Ça devient compliqué toutes ces versions Ecoutez cet épisode
While learning the technical ins and outs of ServiceNow are important for your career, there's more to being a great ServiceNow dev than knowing how to hack XML files or all the JavaScript APIs. In this episode, Lauren and I have a chat about some key moments in our lives that changed our careers. Topics 00:00 Welcome 03:50 Lauren's #5 08:26 Chuck's #5 11:31 Lauren's #4 16:49 Chuck's #4 21:09 Lauren's #3 26:26 Chuck's #3 31:14 Lauren's #2 35:21 Chuck's #2 38:51 Lauren's #1 43:39 Chuck's #1 48:16 Outro Links Book: The Art of Simple Living Manager Tools/Career Tools podcasts Check out the other ServiceNow podcasts.See omnystudio.com/listener for privacy information.
Today Anthropic's Zach Witten takes us on a deep dive into Anthropic's cutting-edge AI models—Claude Haiku, Sonnet, and Opus—exploring their safety-first approach to generative AI and sharing essential tips for prompt engineering.Topics Include:Introductions, about Anthropic3 models: Haiku, Sonnet and OpusScaling laws for hardware, data and computeCompeting to be safest AI solutions, safety-first organizationLeader in jailbreak resistanceInterpretability features and breakthroughs for AI modelsBasics of prompt engineeringImproving prompts with ClaudeDetails matter – small changes to spelling, context will greatly improve resultsSystem prompt – role setting will improve results (i.e. “You are an expert mathematician…” for math queryBe clear and direct – use XML tags where possibleEncourage Claude to think step-by-step – answering fast comes with accuracy riskUse examples to provide additional clarity to ClaudeBonus tips for image-based prompt engineeringQ&A 1) Who wrote the meta-prompts in the cookbook?Q&A 2) Guidance for writing prompts for prompt generatorQ&A 3) Best practices for tabular and structured dataQ&A 4) Maintaining “tone” across hundreds/thousands of responsesQ&A 5) Reverse engineering a prompt
Episode 83: In this episode of Critical Thinking - Bug Bounty Podcast Joel and Justin are brainstorming new features and improvements for Caido, such as the implementation of a 403 bypassing workflow, a text expander, Tracing Cookies, and more.Follow us on twitter at: @ctbbpodcastWe're new to this podcasting thing, so feel free to send us any feedback here: info@criticalthinkingpodcast.ioShoutout to YTCracker for the awesome intro music!------ Links ------Follow your hosts Rhynorater & Teknogeek on twitter:https://twitter.com/0xteknogeekhttps://twitter.com/rhynorater------ Ways to Support CTBBPodcast ------Hop on the CTBB Discord at https://ctbb.show/discord!We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.Resources:Post from Gareth Heyeshttps://x.com/garethheyes/status/1811084674988474417Wiki List of XML and HTMLhttps://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references#List_of_character_entity_references_in_HTMLHackerOne Leaderboard Changeshttps://x.com/scarybeasts/status/1810813103354892666Espansohttps://espanso.org/Critical Thinkers Discordctbb.show/criticalthinkersOauth Scanhttps://portswigger.net/bappstore/8ef2db1173e8432c8797831c2e730727Timestamps:(00:00:00) Introduction(00:03:12) News(00:13:20) Into the Brainstorm(00:13:41) 403 Bypasser(00:20:34) "Expaido"(00:31:34) Trace Cookies(00:42:01) Highlight Decoding Expansion and AI integrations(00:49:08) OAuth Testing, API Highlighter, and Note-taking
WBSRocks: Business Growth with ERP and Digital Transformation
Send us a Text Message.Prior to the introduction of supply chain business networks, industries relied on research and survey-based methods for supply chain planning. Companies in the data business frequently encountered substantial errors, resulting in inefficiencies across the supply chain. The establishment of networks posed challenges due to variations in communication standards and the difficulty of convincing the entire industry to adopt a unified platform. Although business-to-business communication adhered to standards such as XML or EDI, they provided limited connectivity and acknowledgment without centralized repositories to facilitate industry-wide supply chains.In this episode, our host, Sam Gupta, discusses the top 10 supply chain business network platforms in 2024. He also discusses several variables that influence the rankings of supply chain business network platforms. Finally, he shares the pros and cons of each supply chain business network platform.For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs. rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Why yq? Adventurs in XML https://isc.sans.edu/diary/Why%20yq%3F%20%20Adventures%20in%20XML/30930 Black Basta Uses Quick Assist https://www.microsoft.com/en-us/security/blog/2024/05/15/threat-actors-misusing-quick-assist-in-social-engineering-attacks-leading-to-ransomware/ Various Chrome 0-Day Vulnerabilities https://chromereleases.googleblog.com/2024/05/stable-channel-update-for-desktop_15.html Android Theft Protection Improvement https://blog.google/products/android/android-theft-protection/ Critical Git Update https://github.blog/2024-05-14-securing-git-addressing-5-new-vulnerabilities/
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Why yq? Adventurs in XML https://isc.sans.edu/diary/Why%20yq%3F%20%20Adventures%20in%20XML/30930 Black Basta Uses Quick Assist https://www.microsoft.com/en-us/security/blog/2024/05/15/threat-actors-misusing-quick-assist-in-social-engineering-attacks-leading-to-ransomware/ Various Chrome 0-Day Vulnerabilities https://chromereleases.googleblog.com/2024/05/stable-channel-update-for-desktop_15.html Android Theft Protection Improvement https://blog.google/products/android/android-theft-protection/ Critical Git Update https://github.blog/2024-05-14-securing-git-addressing-5-new-vulnerabilities/
Relax and dance with a drowsy dancing bear on the Irish & Celtic Music Podcast #660. Subscribe now! The Drowsy Lads, Fialla, Blackthorn, Charlene Adzima, Ed Miller, Enda Reilly, Conor Mallon, Fig for a Kiss, The Crowfoot Rakes, Luas, Sharon Shannon, Drumspyder, Elias Alexander & Ramblxr, The Low Kings GET CELTIC MUSIC NEWS IN YOUR INBOX The Celtic Music Magazine is a quick and easy way to plug yourself into more great Celtic culture. Enjoy seven weekly news items for Celtic music and culture online. Subscribe now and get 34 Celtic MP3s for Free. VOTE IN THE CELTIC TOP 20 FOR 2024 This is our way of finding the best songs and artists each year. You can vote for as many songs and tunes that inspire you in each episode. Your vote helps me create next year's Best Celtic music of 2024 episode. You have just three weeks to vote this year. Vote Now! You can follow our playlist on Spotify to listen to those top voted tracks as they are added every 2 - 3 weeks. It also makes it easier for you to add these artists to your own playlists. You can also check out our Irish & Celtic Music Videos. THIS WEEK IN CELTIC MUSIC 0:06 - The Drowsy Lads “Dancing Bear Set: Lads of Laois / Eileen Curran / The Reconciliation” from Single 5:13 - WELCOME 6:47 - Fialla “Forgotten Daze” from Home & Away 10:35 - Blackthorn “Don't Come Again” from Here's To You 14:05 - Charlene Adzima “The Initiation Reel” from The Initiation 17:02 - Ed Miller “More Than Just a Dram” from Lolander 21:23 - FEEDBACK 26:02 - Enda Reilly “Off the Grid For a While” from Single 29:38 - Conor Mallon “Unearthed” from Unearthed 33:50 - Fig for a Kiss “Frobisher Bay” from Wherever You Go 37:45 - The Crowfoot Rakes “Johnny Jump Up” from Off She Goes 42:18 - THANKS 43:22 - Luas “Ard Tí Cuain” from Single 47:57 - Sharon Shannon “The Bag of Cats” from Each Little Thing 52:26 - Drumspyder “The Mooncoin” from Green Mantle 56:30 - Elias Alexander & Ramblxr “FIDDLE DISCO!” from Single 1:00:11 - CLOSING 1:00:58 - The Low Kings “Paddy's Round” from Single 1:04:27 - CREDITS The Irish & Celtic Music Podcast was produced by Marc Gunn, The Celtfather and our Patrons on Patreon. The show was edited by Mitchell Petersen with Graphics by Miranda Nelson Designs. Visit our website to follow the show. You'll find links to all of the artists played in this episode. Todd Wiley is the editor of the Celtic Music Magazine. Subscribe to get 34 Celtic MP3s for Free. Plus, you'll get 7 weekly news items about what's happening with Celtic music and culture online. Best of all, you will connect with your Celtic heritage. Please tell one friend about this podcast. Word of mouth is the absolute best way to support any creative endeavor. Finally, remember. Reduce, reuse, recycle, and think about how you can make a positive impact on your environment. Promote Celtic culture through music at http://celticmusicpodcast.com/. WELCOME THE IRISH & CELTIC MUSIC PODCAST * Helping you celebrate Celtic culture through music. I am Marc Gunn. This podcast is for fans of Celtic music. All styles. From the traditional reels and jigs, to pub songs, Celtic rock and even occasionally electronic. We are here to build a diverse Celtic community and help the incredible artists who so generously share their music with you. If you hear music you love, please email artists to let them know you heard them on the Irish & Celtic Music Podcast. Musicians depend on your generosity to keep making music. So please find a way to support them. Buy a CD, Album Pin, Shirt, Digital Download, or join their communities on Patreon. You can find a link to all of the artists in the shownotes, along with show times, when you visit our website at celticmusicpodcast.com. If you are a Celtic musician or in a Celtic band, then please submit your band to be played on the podcast. You don't have to send in any music or an EPK. Plus, you will get a free eBook called Celtic Musicians Guide to Digital Musicand learn how to follow the podcast. It's 100% free. Just email follow@bestcelticmusic and of course, listeners can learn how to subscribe to the podcast and get a free music - only episode. I can't believe I am less than one month away from my Celtic Invasion of the Isle of Man. I've a great trip planned. And hopefully, I'll come back with audio recordings to share on this podcast and my Celtic Invasion Vacations podcast. THANK YOU PATRONS OF THE PODCAST! You are amazing. It is because of your generosity that you get to hear so much great Celtic music each and every week. Your kindness pays for our engineer, graphic designer, Celtic Music Magazine editor, promotion of the podcast, and allows me to buy the music I play here. It also pays for my time creating the show each and every week. As a patron, you get ad - free and music - only episodes before regular listeners, vote in the Celtic Top 20, stand - alone stories, you get a private feed to listen to the show or you can listen through the Patreon app. All that for as little as $1 per episode. A special thanks to our new and continued Patrons of the Podcast: Patricia or Trish, David Willer HERE IS YOUR THREE STEP PLAN TO SUPPORT THE PODCAST Go to our Patreon page. Decide how much you want to pledge every week, $1, $5, $25. Make sure to cap how much you want to spend per month. Keep listening to the Irish & Celtic Music Podcast to celebrate Celtic culture through music. You can become a generous Patron of the Podcast on Patreon at SongHenge.com. TRAVEL WITH CELTIC INVASION VACATIONS Every year, I take a small group of Celtic music fans on the relaxing adventure of a lifetime. We don't see everything. Instead, we stay in one area. We get to know the region through its culture, history, and legends. You can join us with an auditory and visual adventure through podcasts and videos. Learn more about the invasion at http://celticinvasion.com/ #celticmusic #irishmusic #celticmusicpodcast I WANT YOUR FEEDBACK What are you doing today while listening to the podcast? Please email me. I'd love to see a picture of what you're doing while listening or of a band that you saw recently. Email me at follow@bestcelticmusic. Brandi Carpenter emailed from Spokane, WA: “Hi Marc, I've been listening to your podcast for many years now and love it so much!! My family has switched to Spotify and I'm wondering why I can't find your Irish and Celtic Music Podcast on it...will it make it on there eventually? I don't have an Apple phone so I'm sad that my current podcast hookup (Google) is closing down. Thank you for all your hard work!” Ben Q emailed: “Hi Marc, Big fan & patron with an Android problem - Google Podcasts has gone away, annoyingly (just a few short years after they killed Google Listen and forced everyone to migrate there). I see that the "Listen on Google Podcasts" button on celticmusicpodcast.com now redirects to podcastaddict.com, so I guess that this sunsetting of that podcast app is probably not news to you. However, I've tentatively replaced Google Podcasts as my podcast app, not with podcastaddict, but with what Google / Android suggested as a replacement, YouTube Music. I imported all of my podcasts, and they are all working the same way as before the change, except for The Irish & Celtic Music Podcast. Searching YouTube Music for "irish & celtic music podcast" on YouTube Music brings up these search results, of which one looks like the right one, this result. However, that podcast doesn't seem to have the correct shows - the most recent one is from 2023; I'm not able to listen to Red - Haired Bully Boys #657 or Plant Your Boots For Freedom #656 at that link. I'm sure you're busy, but just wondering if you had any insight into this. It's worth mentioning that, if I understand correctly, this app doesn't require people's podcasts to be hosted on YouTube - I'm listening to the Savage Lovecast, for example, just as I usually do, and I don't think Dan has moved his show archive over to YouTube; I'm just using YouTube Music to listen to his regular, free podcast XML feed like usual. Anyway, thanks for taking the time and for all the work that you're doing.” Daniel Morrison emailed: “Hi Marc, I am finally making my first trip to Ireland. My Grandparents of both sides are Ireland. Castle Bar Mayo, Tuam Galway for Dad's my Grandparents. Kenmare Kerry for Ma's my Grandparents Going on Package deal for 7 days. Your Podcast has inspired Me so much to make the trip.” Christine Manbeck emailed a photo responding to my St Patrick's Day question: “Guinness Chocolate cake; parade; Screaming Orphans concert”