RJJ Software's Software Development Services

This episode of The Modern .NET Show is supported, in part, by RJJ Software's Software Development Services. Whether your company is looking to elevate its UK operations or reshape its US strategy, we can provide tailored solutions that exceed expectations.

Show Notes

"So on my side it was actually, the interesting experience was that I kind of used it one way, because it was mainly about reading the Python code, the JavaScript code, and, let's say like, the Go implementations, trying to understand what are the concepts, what are the ways about how it has been implemented by the different teams. And then, you know, switching mentally into the other direction of writing the code in C#."— Jochen Kirstaetter

Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie “GaProgMan” Taylor.

In this episode, Jochen Kirstaetter joined us to talk about his .NET SDK for interacting with Google's Gemini suite of LLMs. Jochen tells us that he started his journey by looking at the existing .NET SDK, which didn't seem right to him, so he wrote his own using the HttpClient and HttpClientFactory classes and REST.

"I provide a test project with a lot of tests. And when you look at the simplest one, is that you get your instance of the Generative AI type, which you pass in either your API key, if you want to use it against Google AI, or you pass in your project ID and location if you want to use it against Vertex AI. Then you specify which model that you like to use, and you specify the prompt, and the method that you call is then GenerateContent and you get the response back. So effectively with four lines of code you have a full integration of Gemini into your .NET application."— Jochen Kirstaetter

Along the way, we discuss the fact that Jochen had to look into the Python, JavaScript, and even Go SDKs to get a better understanding of how his .NET SDK should work. We discuss the “Pythonistic .NET” and “.NETy Python” code that developers can accidentally end up writing if they're not careful when moving from .NET to Python and back. And we also talk about Jochen's use of tests as documentation for his SDK.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.

Supporting the Show

If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show.

Full Show Notes

The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/google-gemini-in-net-the-ultimate-guide-with-jochen-kirstaetter/

JoKi's Links:
- JoKi's MVP Profile
- JoKi's Google Developer Expert Profile
- JoKi's website

Other Links:
- Generative AI for .NET Developers with Amit Bahree
- curl
- Noda Time with Jon Skeet
- Google Cloud samples repo on GitHub
- Google's Gemini SDK for Python
- Google's Gemini SDK for JavaScript
- Google's Gemini SDK for Go
- Vertex AI
- JoKi's base NuGet package: Mscc.GenerativeAI
- JoKi's NuGet package: Mscc.GenerativeAI.Google
- System.Text.Json
- gcloud CLI
- .NET Preprocessor directives
- .NET Target Framework Monikers
- QUIC protocol
- IAsyncEnumerable
- Microsoft.Extensions.AI

Supporting the show:
- Leave a rating or review
- Buy the show a coffee
- Become a patron

Getting in Touch:
- Via the contact page
- Joining the Discord

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show
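The "four lines of code" pattern Jochen describes (create the client with an API key for Google AI, or a project ID and location for Vertex AI; pick a model; pass a prompt; call GenerateContent) can be sketched with a stand-in client. Note this is a Python mock of the call pattern for illustration only; the real Mscc.GenerativeAI package is a C# library, and its actual types and method signatures may differ:

```python
# Mock of the call pattern described in the episode -- NOT the real
# Mscc.GenerativeAI API, which is C#-based. Class and method names here
# are assumptions mirroring the quote above.
class GenerativeModel:
    def __init__(self, client, model):
        self.client = client
        self.model = model

    def generate_content(self, prompt):
        # A real implementation would POST to the Gemini REST endpoint
        # via HttpClient; here we just echo a canned response.
        return f"[{self.model}] response to: {prompt}"


class GenerativeAI:
    def __init__(self, api_key=None, project_id=None, location=None):
        # Google AI authenticates with an API key; Vertex AI uses a
        # project ID plus location instead.
        if api_key is None and (project_id is None or location is None):
            raise ValueError("need an api_key, or a project_id and location")
        self.api_key = api_key
        self.project_id = project_id
        self.location = location

    def generative_model(self, model):
        return GenerativeModel(self, model)


# The "four lines" of integration:
genai = GenerativeAI(api_key="YOUR_API_KEY")
model = genai.generative_model("gemini-1.5-pro")
response = model.generate_content("Explain HttpClientFactory in one sentence.")
print(response)
```

The shape matters more than the names: one client object holding credentials, one model handle, one generate call, one response.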
Founder of VoidZero and creator of Vue and Vite, Evan You joins us to talk about the evolution of JavaScript tooling, the success of Vite, and what's coming next with VitePlus — a unified toolchain aiming to simplify dev workflows. We also touch on Nitro, multi-runtime support, and where AI might (or might not) fit into the mix.

Links
- https://evanyou.me
- https://x.com/youyuxi
- https://bsky.app/profile/evanyou.me
- https://github.com/yyx990803

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com (https://logrocket.com/signup/?pdr).

Special Guest: Evan You.
How fast is your app, really? Alexandre Moureaux joins Jamon, Robin, and Mazen to talk about Flashlight, a tool for scoring mobile performance and spotting bottlenecks in production. If you care about React Native performance, this one's for you.

Show Notes
- Flashlight
- Performance issues: the usual suspects - A. Moureaux | React Native EU 2022
- Alexandre Moureaux – Lighthouse for mobile apps | App.js Conf 2023
- Example Ignite Flashlight Report

Connect With Us!
- Alexandre Moureaux: @almouro
- Jamon Holmgren: @jamonholmgren
- Robin Heinze: @robinheinze
- Mazen Chami: @mazenchami
- React Native Radio: @ReactNativeRdio

This episode is brought to you by Infinite Red! Infinite Red is an expert React Native consultancy located in the USA. With nearly a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.
Deep Dive into Drupal Performance Testing with Gander | Tag1 Team Talks

Welcome to Tag1 Team Talks! In this episode, Michael Meyers, Managing Director at Tag1 Consulting, interviews Nathaniel 'Catch' Catchpole, a prolific contributor to the Drupal platform and Gander Project Lead. Together, they explore Tag1's incorporation of Gander as a foundational element in the next-gen content management systems, developed in collaboration with the Google Chrome team. Learn about Gander's role as Drupal's official performance testing framework and its impact on Drupal core development. Catch shares real-world success stories, dives into the technical workings of Gander, and discusses ongoing and future performance optimizations. From reducing JavaScript load sizes to improving cache performance, Catch offers invaluable insights into making Drupal faster and more efficient. Don't miss this in-depth exploration of Gander and its transformative impact on Drupal CMS!

00:00 Introduction to Tag1 Team Talks
00:35 Meet Nathaniel Catchpole: Gander Project Lead
01:40 Introduction to Gander: Drupal's Performance Testing Framework
01:57 How Gander Works: Technical Insights
04:49 Gander's Impact on Drupal Core Development
07:16 Real-World Success Stories with Gander
16:59 Recent Developments and Improvements in Gander
23:51 Future Roadmap for Gander
33:38 How to Get Involved with Gander
34:30 Conclusion
Get ready to explore how generative AI is transforming development in Oracle APEX. In this episode, hosts Lois Houston and Nikita Abraham are joined by Oracle APEX experts Apoorva Srinivas and Toufiq Mohammed to break down the innovative features of APEX 24.1. Learn how developers can use APEX Assistant to build apps, generate SQL, and create data models using natural language prompts.

Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Nikita: Welcome back to another episode of the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs.

Lois: Hi everyone! In our last episode, we spoke about Oracle APEX and AI. We covered the data- and AI-centric challenges businesses are up against and explored how AI fits in with Oracle APEX. Niki, what's in store for today?

Nikita: Well, Lois, today we're diving into how generative AI powers Oracle APEX. With APEX 24.1, developers can use the Create Application Wizard to tell APEX what kind of application they want to build based on available tables. Plus, APEX Assistant helps create, refine, and debug SQL code in natural language.

01:16 Lois: Right.
Today's episode will focus on how generative AI enhances development in APEX. We'll explore its architecture, the different AI providers, and key use cases. Joining us are two senior product managers from Oracle—Apoorva Srinivas and Toufiq Mohammed. Thank you both for joining us today. We'll start with you, Apoorva. Can you tell us a bit about the generative AI service in Oracle APEX?

Apoorva: It is essentially an abstraction over the popular commercial generative AI products, like OCI Generative AI, OpenAI, and Cohere. APEX makes use of the existing REST infrastructure to authenticate with generative AI services using web credentials. Once you configure the Generative AI Service, it can be used by the App Builder, AI Assistant, and AI Dynamic Actions, like Show AI Assistant and Generate Text with AI, and also the APEX_AI PL/SQL API. You can enable or disable the Generative AI Service at the APEX instance level and at the workspace level.

02:31 Nikita: Ok. Got it. So, Apoorva, which AI providers can be configured in the APEX Gen AI service?

Apoorva: First is the popular OpenAI. If you have registered and subscribed for an OpenAI API key, you can just enter the API key in your APEX workspace to configure the Generative AI service. APEX makes use of the chat completions endpoint in OpenAI. Second is the OCI Generative AI Service. Once you have configured an OCI API key on Oracle Cloud, you can make use of the chat models, which are available from the Cohere family and the Meta Llama family. The third is Cohere. The configuration of Cohere is similar to OpenAI: you need your Cohere API key, and it provides similar chat functionality using the chat endpoint.

03:29 Lois: What is the purpose of the APEX_AI PL/SQL public API that we now have? How is it used within the APEX ecosystem?

Apoorva: It models the chat operation of the popular generative AI REST services. This is the same package used internally by the chat widget of the APEX Assistant. There are also procedures around consent management, which you can configure using this package.

03:58 Lois: Apoorva, at a high level, how does generative AI fit into the APEX environment?

Apoorva: APEX makes use of the existing REST infrastructure—that is, the web credentials and remote server—to configure the Generative AI Service. The inferencing is done by the backend generative AI service. For generative AI use cases in APEX, such as NL2SQL and the creation of an app, APEX performs the prompt enrichment.

04:29 Nikita: And what exactly is prompt enrichment?

Apoorva: Let's say you provide a prompt saying "show me the average salary of employees in each department." APEX will take this prompt and enrich it by adding in more details. It elaborates on the prompt by mentioning requirements, such as Oracle SQL syntax, and providing some metadata from the APEX data dictionary. Once the prompt enrichment is complete, it is then passed on to the LLM inferencing service. Therefore, the SQL query provided by the AI Assistant is more accurate and in context.

05:15 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll.

05:41 Nikita: Welcome back! Let's talk use cases. Apoorva, can you share some ways developers can use generative AI with APEX?

Apoorva: SQL is an integral part of building APEX apps; you use SQL everywhere. First, you can make use of the NL2SQL feature in the code editor by using the APEX Assistant to generate SQL queries while building apps. Second is prompt-based app creation: with APEX Assistant, you can now generate fully functional APEX apps by providing prompts in natural language.
Third is the AI Assistant, which is a chat widget provided by APEX in all the code editors and for the creation of apps. You can chat with the AI Assistant by providing your prompts and get responses from the generative AI services.

06:37 Lois: Without getting too technical, can you tell us how to create a data model using AI?

Apoorva: A SQL Workshop utility called Create Data Model Using AI uses AI to help you create your own data model. The APEX Assistant generates a script to create tables, triggers, and constraints in either Oracle SQL or Quick SQL format. You can also insert sample data into these tables. But before you use this feature, you must create a generative AI service and enable the Used by App Builder setting. If you are using the Oracle SQL format, when you click on Create SQL Script, APEX generates the script and brings you to the script editor page. Whereas if you are using the Quick SQL format, when you click on Review Quick SQL, APEX generates the Quick SQL code and brings you to the Quick SQL page.

07:39 Lois: And to see a detailed demo of creating a custom data model with the APEX Assistant, visit mylearn.oracle.com and search for the "Oracle APEX: Empowering Low Code Apps with AI" course. Apoorva, what about creating an APEX app from a prompt? What's that process like?

Apoorva: APEX 24.1 introduces a new feature where you can generate an application blueprint based on a natural language prompt. The APEX Assistant leverages the APEX Dictionary Cache to identify relevant tables while suggesting the pages to be created for your application. You can iterate over the application design by providing further prompts in natural language and then generating an application based on your needs. Once you are satisfied, you can click on Create Application, which takes you to the Create Application Wizard in APEX, where you can further customize your application, such as the application icon and other features, and finally go ahead and create your application.

08:53 Nikita: Again, you can watch a demo of this on MyLearn. So, check that out if you want to dive deeper.

Lois: That's right, Niki. Thank you for these great insights, Apoorva! Now, let's turn to Toufiq. Toufiq, can you tell us more about the APEX Assistant feature in Oracle APEX? What is it and how does it work?

Toufiq: APEX Assistant is available in the code editors in the APEX App Builder. It leverages generative AI services as the backend to answer questions asked in natural language, and it makes use of the APEX dictionary cache to identify relevant tables while generating SQL queries. Using the Query Builder mode, the Assistant can generate SQL queries from natural language for Form, Report, and other region types which support SQL queries. Using the general assistance mode, you can generate PL/SQL, JavaScript, HTML, or CSS code, and seek further assistance from generative AI. For example, you can ask the APEX Assistant to optimize the code, format the code for better readability, add comments, etc. APEX Assistant also comes with two quick actions, Improve and Explain, which can help users improve and understand the selected code.

10:17 Nikita: What about the Show AI Assistant dynamic action? I know that it provides an AI chat interface, but can you tell us a little more about it?

Toufiq: It is a native dynamic action in Oracle APEX which renders an AI chat user interface. It leverages the generative AI services that are configured under Workspace Utilities. This AI chat user interface can be rendered inline or as a dialog. This dynamic action also has configurable system prompt and welcome message attributes.
10:52 Lois: Are there attributes you can configure to leverage even more customization?

Toufiq: The first attribute is the initial prompt. The initial prompt represents a message as if it were coming from the user; this can either be a specific item value or a value derived from a JavaScript expression. The next attribute is use response. This attribute determines how the AI Assistant should return responses, where "response" refers to the message content of an individual chat message. You have the option to capture this response directly into a page item, or to process it with more complex logic using JavaScript code. The final attribute is quick actions. A quick action is a predefined phrase that, once clicked, will be sent as a user message. Quick actions defined here show up as chips in the AI chat interface, which a user can click to send the message to the generative AI service without having to type it in manually.

12:05 Lois: Thank you, Toufiq and Apoorva, for joining us today. Like we were saying, there's a lot more you can find in the “Oracle APEX: Empowering Low Code Apps with AI” course on MyLearn. So, make sure you go check that out.

Nikita: Join us next week for a discussion on how to integrate APEX with OCI AI Services. Until then, this is Nikita Abraham…

Lois: And Lois Houston signing off!

12:28 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
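The prompt-enrichment step Apoorva describes, wrapping a user's natural-language request with dialect requirements and data-dictionary metadata before it reaches the LLM, can be sketched in a few lines. This is an illustrative mock, not APEX internals; the template wording and the schema format are assumptions:

```python
def enrich_prompt(user_prompt, table_metadata):
    """Wrap a natural-language request with SQL-dialect rules and schema
    context before sending it to the LLM.

    Illustrative sketch only -- the real enrichment done by APEX is not
    public; this just shows the general technique.
    """
    schema_lines = [
        f"- {table}({', '.join(columns)})"
        for table, columns in table_metadata.items()
    ]
    return "\n".join([
        "You generate SQL. Requirements:",
        "- Use Oracle SQL syntax only.",
        "- Only reference the tables and columns listed below.",
        "Available tables:",
        *schema_lines,
        f"Request: {user_prompt}",
    ])


# Hypothetical metadata standing in for the APEX data dictionary:
metadata = {
    "EMPLOYEES": ["EMP_ID", "DEPT_ID", "SALARY"],
    "DEPARTMENTS": ["DEPT_ID", "DEPT_NAME"],
}
print(enrich_prompt(
    "show me the average salary of employees in each department", metadata))
```

The point is that the model never sees the bare request: it sees the request plus the dialect constraint plus the schema, which is why the generated SQL comes back "more accurate and in context."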
While most developers today gravitate toward modern languages like JavaScript or Python, many financial institutions still run their critical systems on technologies considered "legacy," such as COBOL. This language, created more than 60 years ago, remains the silent engine behind millions of daily transactions. However, the shortage of professionals trained to maintain these environments is a growing challenge, especially as the senior engineers who have led these systems for decades progressively leave the workforce.

In this context, Interbank and the digital training institute Codeable announced the launch of the Interbank COBOL Academy, an intensive educational program that aims to train the next generation of engineers specialized in legacy banking systems. The initiative responds to a structural need: renewing the talent pool that masters key technologies such as COBOL, Mainframe environments, and core banking architecture, whose stability and security remain essential to the country's financial operations.
In this episode, we dive into an engaging conversation with Kelvin, where we explore his approach to full-stack JavaScript development and the power of using simple, stable technologies to speed up app development.

Kelvin shares his exciting project, "Project 50," where he's challenging himself to build 50 apps in 50 days, highlighting the importance of leveraging "boring" stacks to streamline the development process. We also touch on his journey in teaching web development through free resources and screencasts, aiming to make it easier for developers to build real-world apps quickly. Along the way, we discuss the value of strategy games like chess and Go, and how they help foster critical thinking and continuous learning. It's a great mix of tech, strategy, and entertainment, making this episode a must-listen for developers and anyone looking to level up their skills. Tune in for a fun and insightful discussion!

Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.
Guillermo Rauch is the founder and CEO of Vercel, creators of v0 (one of the most popular AI app building tools), and the mind behind foundational JavaScript frameworks like Next.js and Socket.io. An open source pioneer and legendary engineer, Guillermo has built tools that power some of the internet's most innovative products, including Midjourney, Grok, and Notion. His mission is to democratize product creation, expanding the pool of potential builders from 5 million developers to over 100 million people worldwide.

In this episode, you'll learn:
1. How AI will radically speed up product development—and the three critical skills PMs and engineers should master now to stay ahead
2. Why the future of building apps is shifting toward prompts instead of code, and how that affects traditional product teams
3. Specific ways to improve your design “taste,” plus practical tips to consistently create beautiful, user-loved products
4. How Guillermo built a powerful app in under two hours for $20 (while flying and using plane Wi-Fi) that would normally take weeks and thousands of dollars in engineering time
5. The exact strategies Vercel uses internally to leverage AI tools like v0 and Cursor, enabling their team of 600 to ship faster and better than ever before
6. Guillermo's actionable advice on increasing your product quality through rapid iteration, real-world user feedback, and creating intentional “exposure hours” for your team

Brought to you by:
• WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs
• Vanta—Automate compliance. Simplify security
• LinkedIn Ads—Reach professionals and drive results for your business

Where to find Guillermo Rauch:
• X: https://x.com/rauchg
• LinkedIn: https://www.linkedin.com/in/rauchg/
• Website: https://rauchg.com/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Guillermo Rauch
(04:43) v0's mission
(07:03) The impact and growth of v0
(15:54) The future of product development with AI
(19:05) Empowering engineers and product builders
(24:01) Skills for the future: coding, math, and eloquence
(35:05) v0 in action: real-world applications
(36:40) Tips for using v0 effectively
(45:46) Core skills for building AI apps
(49:44) Live demo
(59:45) Understanding how AI thinks
(01:04:35) AI integration and future prospects
(01:07:22) Building taste
(01:13:43) Limitations of v0
(01:16:54) Improving the design of your product
(01:20:09) The secret to product quality
(01:22:35) Vercel's AI-driven development
(01:25:43) Guillermo's vision for the future

Referenced:
• v0: https://v0.dev/
• Vercel: https://vercel.com/
• GitHub: https://github.com/
• Cursor: https://www.cursor.com/
• Next.js Framework: https://nextjs.org/
• Claude: https://claude.ai/new
• Grok: https://x.ai/
• Midjourney: https://www.midjourney.com
• Socket.io: https://socket.io/
• Notion's lost years, its near collapse during Covid, staying small to move fast, the joy and suffering of building horizontal, more | Ivan Zhao (CEO and co-founder): https://www.lennysnewsletter.com/p/inside-notion-ivan-zhao
• Notion: https://www.notion.com/
• Automattic: https://automattic.com/
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• v0 Community: https://v0.dev/chat/community
• Figma: https://www.figma.com/
• Git Commit: https://www.atlassian.com/git/tutorials/saving-changes/git-commit
• What are Artifacts and how do I use them?: https://support.anthropic.com/en/articles/9487310-what-are-artifacts-and-how-do-i-use-them
• Design Engineering at Vercel: https://vercel.com/blog/design-engineering-at-vercel
• CSS: https://en.wikipedia.org/wiki/CSS
• Tailwind: https://tailwindcss.com/
• Wordcel / Shape Rotator / Mathcel: https://knowyourmeme.com/memes/wordcel-shape-rotator-mathcel
• Steve Jobs's Ultimate Lesson for Companies: https://hbr.org/2011/08/steve-jobss-ultimate-lesson-fo
• Bloom Hackathon: https://bloom.build/
• Expenses Should Do Themselves | Saquon Barkley x Ramp (Super Bowl Ad): https://www.youtube.com/watch?v=p1Tgsy7D0Jg
• Velocity over everything: How Ramp became the fastest-growing SaaS startup of all time | Geoff Charles (VP of Product): https://www.lennysnewsletter.com/p/velocity-over-everything-how-ramp
• JavaScript: https://www.javascript.com/
• React: https://react.dev/
• Mapbox: https://www.mapbox.com/
• Leaflet: https://leafletjs.com/
• Escape hatches: https://react.dev/learn/escape-hatches
• Supreme: https://supreme.com/
• Shadcn: https://ui.shadcn.com/
• Charles Schwab: https://www.schwab.com/
• Fortune: https://fortune.com/
• Semafor: https://www.semafor.com/
• AI SDK: https://sdk.vercel.ai/
• DeepSeek: https://www.deepseek.com/
• Stripe: https://stripe.com/
• Vercel templates: https://vercel.com/templates
• GC AI: https://getgc.ai/
• OpenEvidence: https://www.openevidence.com/
• Paris Fashion Week: https://www.fhcm.paris/en/paris-fashion-week
• Guillermo's post on X about making great products: https://x.com/rauchg/status/1887314115066274254
• Everybody Can Cook billboard: https://www.linkedin.com/posts/evilrabbit_activity-7242975574242037760-uRW9/
• Ratatouille: https://www.imdb.com/title/tt0382932/

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Joël and fellow thoughtboter Aji Slater examine the unfamiliar world of TypeScript (https://www.typescriptlang.org/) and various ways of working within its type system. They lay out the pros and cons of TypeScript over other environments such as Ruby and Elm, and discuss their experience of adopting LLM partners to assist in their workflows, utilising ChatGPT and Claude to verify code and trim down syntax, all while trying to appease the type checker. Discover the little tips, tricks, and bad habits they picked up along the way while working with their LLM buddies in an effort to improve efficiency.

Check out Ruby2D (https://www.ruby2d.com) for all your 2D app needs!

You can connect with Aji via LinkedIn (https://www.linkedin.com/in/doodlingdev/), or check out some of the topics he's written about over on his thoughtbot blog (https://thoughtbot.com/blog/authors/aji-slater). Your host for this episode has been Joël Quenneville (https://www.linkedin.com/in/joel-quenneville-96b18b58/).

If you would like to support the show, head over to our GitHub page (https://github.com/sponsors/thoughtbot), or check out our website (https://bikeshed.thoughtbot.com). Got a question or comment about the show? Why not write to our hosts: hosts@bikeshed.fm

This has been a thoughtbot (https://thoughtbot.com/) podcast. Stay up to date by following us on social media:
- YouTube (https://www.youtube.com/@thoughtbot/streams)
- LinkedIn (https://www.linkedin.com/company/150727/)
- Mastodon (https://thoughtbot.social/@thoughtbot)
- Bluesky (https://bsky.app/profile/thoughtbot.com)

© 2025 thoughtbot, inc.
Mercedes Bernard, Staff Software Engineer at Kit, joins Robby to talk about what it really means to write code that lasts—and who it should be written for.

In this episode of Maintainable, Mercedes shares a thoughtful and practical perspective on working with legacy codebases, managing technical debt, and creating a team culture that values maintainability without fear or shame. Her guiding principle? Well-maintained software is friendly software: code that is understandable and approachable, especially for early-career developers.

Together, they discuss how to audit and stabilize older systems, avoid full rewrites, and create consistent developer experiences in large applications. Mercedes reflects on her decade in consulting and how that shaped her approach to navigating incomplete documentation, missing historical context, and multiple competing patterns in a codebase. She breaks down different types of technical debt, explains why not all of it is inherently bad, and offers strategies for advocating for maintenance work across engineering and product teams.

The conversation also touches on architecture patterns like job fan-out, measuring performance regressions, reducing infrastructure load, and building momentum for improvements even when leadership isn't actively prioritizing them.

If you've ever felt overwhelmed by a messy project or struggled to justify maintenance work, this episode will leave you with a fresh mindset, and a few practical tactics, for making code more sustainable and inclusive.

Episode Highlights

[00:01:08] Defining Well-Maintained Software: Mercedes explains her top metric: software that feels friendly, especially to early-career developers navigating the codebase for the first time.
[00:03:00] What Friendly Code Actually Looks Like: She shares why consistency, discoverability, and light documentation (like class comments or UML snippets) can make a huge difference.
[00:05:00] Assessing Code Like a House Tour: Mercedes introduces her metaphor of giving a house tour to evaluate code: does everything feel like it's in the right place, or is the stove in the cabinet?
[00:06:53] Consulting Mindset: Being a Guest in the Codebase: With a decade of consulting experience, Mercedes shares how she navigates legacy systems when historical context is long gone.
[00:10:40] Stabilizing a Startup's Tangled Architecture: She walks through an in-depth case study where she helped a client with multiple abandoned services get back to stability, without a rewrite.
[00:17:00] The Power of a One-Line Fix: Mercedes shares how a missing check caused a job to fan out 30 million no-op background jobs a day, and how one line of code reduced that by 75%.
[00:23:40] Why State Checks Belong Everywhere: She explains how defense-in-depth patterns help avoid job queue flooding and protect system resources early in the fan-out process.
[00:24:59] Reframing Technical Debt: Not all debt is bad. Mercedes outlines three types (intentional, evolutionary, and time-based) and how to approach each one differently.
[00:28:00] Why Teams Fall Behind Without Realizing It: Mercedes and Robby talk about communication gaps between engineers and product stakeholders, and why it's not always clear when tech debt starts piling up.
[00:34:00] Quantifying Developer Friction: Mercedes recommends expressing technical debt in terms of lost time, slow features, and increased cost rather than vague frustrations.
[00:42:00] Getting Momentum Without Permission: Her advice to individual contributors: start small. Break down your frustrations into bite-sized RFCs or tickets and show the impact.
[00:45:40] Letting the Team Drive Standards: Mercedes encourages team-led conventions over top-down declarations, and explains why having any decision is better than indecision.
[00:47:54] Recommended Reading: She shares a surprising favorite: The Secret Life of Groceries, a systems-thinking deep dive into the grocery industry by Benjamin Lorr.

Resources & Links
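The fan-out guard discussed in the episode, checking state before enqueueing so no-op jobs never reach the queue, can be sketched generically. The record shape and job names below are hypothetical; the episode's actual system is not described in code:

```python
# Hypothetical sketch of a fan-out guard (defense in depth): check state
# *before* enqueueing, rather than enqueueing a job for every record and
# letting each job discover it has nothing to do.
def fan_out_sync_jobs(records, enqueue):
    """Enqueue a sync job only for records that actually need work.

    Returns the number of jobs enqueued. `records` is a list of dicts
    with hypothetical "id" and "needs_sync" fields; `enqueue` is any
    callable that accepts a record id (e.g. a queue's append/push).
    """
    enqueued = 0
    for record in records:
        # The "one-line fix": without this guard, every record spawns a
        # background job, even when there is nothing to sync.
        if not record.get("needs_sync"):
            continue
        enqueue(record["id"])
        enqueued += 1
    return enqueued


queue = []
records = [
    {"id": 1, "needs_sync": True},
    {"id": 2, "needs_sync": False},
    {"id": 3, "needs_sync": False},
    {"id": 4, "needs_sync": True},
]
fan_out_sync_jobs(records, queue.append)
print(queue)  # only the records that actually need work
```

Repeating the same state check inside the job itself (in case the record changed between enqueue and execution) is the defense-in-depth half of the pattern: the guard at the fan-out point protects the queue, and the guard inside the job protects correctness.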
Josh Cirre joins us to discuss his transition from the JavaScript ecosystem to Laravel, revealing why PHP frameworks can offer a compelling alternative for full-stack development. We explore the "identity crisis" many frontend developers face when needing robust backend solutions, how Laravel's batteries-included approach compares to piecing together JavaScript services, and the trade-offs between serverless and traditional hosting environments. Josh also shares insights on Laravel's developer experience, front-end integration options, and his thoughts on what JavaScript frameworks could learn from Laravel's approach to abstraction and infrastructure.

Show Notes
0:00 - Intro
1:02 - Sponsor: Wix Studio
1:46 - Introduction to Laravel
2:25 - Josh's Journey from Frontend to Backend
5:40 - Building the Same Project Across Frameworks
6:32 - Josh's Breakthrough with Laravel
8:20 - Laravel's Frontend Options
10:25 - React Server Components Comparison
12:00 - Livewire and Volt
13:41 - Josh's Course on Laracasts
14:08 - Laravel's DX and Ecosystem
16:46 - MVC Structure Explained for JavaScript Developers
18:25 - Type Safety Between PHP and JavaScript
21:12 - Laravel Pain Points and Criticisms
22:40 - Laravel Team's Response to Feedback
24:50 - Laravel's Limitations and Use Cases
26:10 - Laravel's Developer Products
27:20 - Option Paralysis in Laravel
30:46 - Laravel's Driver System
33:14 - Web Dev Challenge Experience
33:38 - TanStack Start Exploration
34:50 - Server Functions in TanStack
37:38 - Infrastructure Agnostic Development
41:02 - Serverless vs. Serverful Cost Comparison
44:50 - JavaScript Framework Evolution
46:46 - Framework Ecosystems Comparison
48:25 - Picks and Plugs

Links Mentioned in the Episode
Laravel - PHP framework
TanStack Start - React meta-framework Josh created a YouTube video about
Livewire - Laravel's HTML-over-the-wire front-end framework
Inertia.js - Framework for creating single-page apps
Volt - Single file component system for Livewire
Laravel Cloud - Managed hosting solution for Laravel applications
Herd - Laravel's tool for setting up PHP development environments
Forge - Laravel's server management tool
Envoyer - Laravel's zero-downtime deployment tool
Laracasts - Where Josh has a course on Livewire
Josh Cirre's YouTube channel
HTMX - Frontend library Josh compared to Livewire
Web Dev Challenge with Jason Lengstorf (featuring Josh and Amy)
Josh Cirre's BlueSky account (@joshcirre)
Amy's BlueSky account
Brad's BlueSky account

Additional Resources
Laravel Documentation
Svelte's new starter kit (mentioned as a good example)
Nightwatch - Latest product from Laravel
Laravel Vapor - Serverless deployment platform for Laravel
Theo's Laravel exploration (discussed in the criticism section)
Laravel Breeze
Laravel Jetstream
Laravel Fortify (authentication package mentioned)
Adonis.js (JavaScript framework compared to Laravel)
Anker USB powered hub (Josh's pick)
Grether's Sugar Free Black Currant Pastilles (Josh's pick)
JBL Portable Speaker (Amy's pick)
I'll teach you how I became a millionaire thanks to programming
Brandon Liu is an open source developer and creator of the Protomaps basemap project. We talk about how static maps help developers build sites that last, the PMTiles file format, the role of OpenStreetMap, and his experience funding and running an open source project full time.

Protomaps
Protomaps
PMTiles (file format used by Protomaps)
Self-hosted slippy maps, for novices (like me)
Why Deploy Protomaps on a CDN

User examples
Flickr
Pinball Map
Toilet Map

Related projects
OpenStreetMap (dataset Protomaps is based on)
Mapzen (former company that released details on what to display based on zoom levels)
Mapbox GL JS (Mapbox-developed, source-available map rendering library)
MapLibre GL JS (open source fork of Mapbox GL JS)

Other links
HTTP range requests (MDN)
Hilbert curve

Transcript
You can help correct transcripts on GitHub.

Intro [00:00:00] Jeremy: I'm talking to Brandon Liu. He's the creator of Protomaps, which is a way to easily create and host your own maps. Let's get into it. [00:00:09] Brandon: Hey, so thanks for having me on the podcast. So I'm Brandon. I work on an open source project called Protomaps. What it really is, is if you're a front end developer and you ever wanted to put maps on a website or on a mobile app, then Protomaps is sort of an open source solution for doing that that I hope is something that's way easier to use than, um, a lot of other open source projects. Why not just use Google Maps? [00:00:36] Jeremy: A lot of people are gonna be familiar with Google Maps. Why should they worry about whether something's open source? Why shouldn't they just go and use the Google maps API? [00:00:47] Brandon: So Google Maps is like an awesome thing it's an awesome product. Probably one of the best tech products ever right? And just to have a map that tells you what restaurants are open and something that I use like all the time especially like when you're traveling it has all that data.
And the most amazing part is that it's free for consumers but it's not necessarily free for developers. Like if you wanted to embed that map onto your website or app, that usually has an API cost which still has a free tier and is affordable. But one motivation, one basic reason to use open source is if you have some project that doesn't really fit into that pricing model. You know like where you have to pay the cost of Google Maps, you have a side project, a nonprofit, that's one reason. But there's lots of other reasons related to flexibility or customization where you might want to use open source instead. Protomaps examples [00:01:49] Jeremy: Can you give some examples where people have used Protomaps and where that made sense for them? [00:01:56] Brandon: I follow a lot of the use cases and I also don't know about a lot of them because I don't have an API where I can track a hundred percent of the users. Some of them use the hosted version, but I would say most of them probably use it on their own infrastructure. One of the cool projects I've been seeing is called Toilet Map. And what Toilet Map is, is if you're in the UK and you want to find a public restroom, then it maps out, sort of crowdsourced, all of the public restrooms. And that's important for like a lot of people if they have health issues, they need to find that information. And just a lot of different projects in the same vein. There's another one called Pinball Map which is sort of a hobby project to find all the pinball machines in the world. And they wanted to have a customized map that fit in with their theme of pinball. So these sorts of really cool indie projects are the ones I'm most excited about. Basemaps vs Overlays [00:02:57] Jeremy: And if we talk about, like the pinball map as an example, there's this concept of a basemap and then there's the things that you lay on top of it. What is a basemap, and then the pinball locations, are those part of it or is that something separate?
[00:03:12] Brandon: It's usually something separate. The example I usually use is if you go to a real estate site, like Zillow, you'll open up the map of Seattle and it has a bunch of pins showing all the houses, and then it has some information beneath it. That information beneath it is like labels telling, this neighborhood is Capitol Hill, or there is a park here. But all that information is common to a lot of use cases and it's not specific to real estate. So I think usually that's the distinction people use in the industry between like a base map versus your overlay. The overlay is like the data for your product or your company while the base map is something you could get from Google or from Protomaps or from Apple or from Mapbox that kind of thing. PMTiles for hosting the basemap and overlays [00:03:58] Jeremy: And so Protomaps in particular is responsible for the base map, and that information includes things like the streets and the locations of landmarks and things like that. Where is all that information coming from? [00:04:12] Brandon: So the base map information comes from a project called OpenStreetMap. And I would also, point out that for Protomaps as sort of an ecosystem. You can also put your overlay data into a format called PMTiles, which is sort of the core of what Protomaps is. So it can really do both. It can transform your data into the PMTiles format which you can host and you can also host the base map. So you kind of have both of those sides of the product in one solution. [00:04:43] Jeremy: And so when you say you have both are you saying that the PMTiles file can have, the base map in one file and then you would have the data you're laying on top in another file? Or what are you describing there? [00:04:57] Brandon: That's usually how I recommend to do it. Oftentimes there'll be sort of like, a really big basemap 'cause it has all of that data about like where the rivers are. 
Or while, if you want to put your map of toilets or park benches or pickleball courts on top, that's another file. But those are all just like assets you can move around like JSON or CSV files. Statically Hosted [00:05:19] Jeremy: And I think one of the things you mentioned was that your goal was to make Protomaps or the, the use of these PMTiles files easy to use. What does that look like for, for a developer? I wanna host a map. What do I actually need to, to put on my servers? [00:05:38] Brandon: So my usual pitch is that basically if you know how to use S3 or cloud storage, that you know how to deploy a map. And that, I think is the main sort of differentiation from most open source projects. Like a lot of them, they call themselves like, like some sort of self-hosted solution. But I've actually avoided using the term self-hosted because I think in most cases that implies a lot of complexity. Like you have to log into a Linux server or you have to use Kubernetes or some sort of Docker thing. What I really want to emphasize is the idea that, for Protomaps, it's self-hosted in the same way like CSS is self-hosted. So you don't really need a service from Amazon to host the JSON files or CSV files. It's really just a static file. [00:06:32] Jeremy: When you say static file that means you could use any static web host to host your HTML file, your JavaScript that actually renders the map. And then you have your PMTiles files, and you're not running a process or anything, you're just putting your files on a static file host. [00:06:50] Brandon: Right. So I think if you're a developer, you can also argue like a static file server is a server. It's you know, it's the cloud, it's just someone else's computer. It's really just nginx under the hood. But I think static storage is sort of special. If you look at things like static site generators, like Jekyll or Hugo, they're really popular because they're a commodity or like the storage is a commodity. 
And you can take your blog, make it a Jekyll blog, hosted on S3. One day, Amazon's like, we're charging three times as much so you can move it to a different cloud provider. And that's all vendor neutral. So I think that's really the special thing about static storage as a primitive on the web. Why running servers is a problem for resilience [00:07:36] Jeremy: Was there a prior experience you had? Like you've worked with maps for a very long time. Were there particular difficulties you had where you said I just gotta have something that can be statically hosted? [00:07:50] Brandon: That's sort of exactly why I got into this. I've been working sort of in and around the map space for over a decade, and Protomaps is really like me trying to solve the same problem I've had over and over again in the past, just like once and forever right? Because like once this problem is solved, like I don't need to deal with it again in the future. So I've worked at a couple of different companies before, mostly as a contractor, for like a humanitarian nonprofit for a design company doing things like, web applications to visualize climate change. Or for even like museums, like digital signage for museums. And oftentimes they had some sort of data visualization component, but always sort of the challenge of how to like, store and also distribute like that data was something that there wasn't really great open source solutions. So just for map data, that's really what motivated that design for Protomaps. [00:08:55] Jeremy: And in those, those projects in the past, were those things where you had to run your own server, run your own database, things like that? [00:09:04] Brandon: Yeah. And oftentimes we did, we would spin up an EC2 instance, for maybe one client and then we would have to host this server serving map data forever. 
Maybe the client goes away, or I guess it's good for business if you can sign some sort of like long-term support for that client saying, Hey, you know, like we're done with a project, but you can pay us to maintain the EC2 server for the next 10 years. And that's attractive, but it's also sort of a pain, because usually what happens is if people are given the choice, like a developer between like either I can manage the server on EC2 or on Rackspace or Hetzner or whatever, or I can go pay a SaaS to do it. In most cases, businesses will choose to pay the SaaS. So that's really like what creates a sort of lock-in is this preference for like, so I have this choice between like running the server or paying the SaaS. Like businesses will almost always go and pay the SaaS. [00:10:05] Jeremy: Yeah. And in this case, you either find some kind of free hosting or low-cost hosting just to host your files and you upload the files and then you're good from there. You don't need to maintain anything. [00:10:18] Brandon: Exactly, and that's really the ideal use case. So I have some users, these climate science consulting agencies, and then they might have like a one-off project where they have to generate the data once, but instead of having to maintain this server for the lifetime of that project, they just have a file on S3 and like, who cares? If that costs a couple dollars a month to run, that's fine, but it's not like S3 is gonna be deprecated, or like it's gonna be on an insecure version of Ubuntu or something. So that's really the ideal set of constraints for using Protomaps. [00:10:58] Jeremy: Yeah. Something this also makes me think about is, is like the resilience of sites like remaining online, because I, interviewed, Kyle Drake, he runs Neocities, which is like a modern version of GeoCities.
And if I remember correctly, he was mentioning how a lot of old websites from that time, if they were running a server backend, like they were running PHP or something like that, if you were to try to go to those sites, now they're like pretty much all dead because there needed to be someone dedicated to running a Linux server, making sure things were patched and so on and so forth. But for static sites, like the ones that used to be hosted on GeoCities, you can go to the internet archive or other websites and they were just files, right? You can bring 'em right back up, and if anybody just puts 'em on a web server, then you're good. They're still alive. Case study of news room preferring static hosting [00:11:53] Brandon: Yeah, exactly. One place that's kind of surprising but makes sense where this comes up, is for newspapers actually. Some of the users using Protomaps are the Washington Post. And the reason they use it, is not necessarily because they don't want to pay for a SaaS like Google, but because if they make an interactive story, they have to guarantee that it still works in a couple of years. And that's like a policy decision from like the editorial board, which is like, so you can't write an article if people can't view it in five years. But if your like interactive data story is reliant on a third party, API and that third party API becomes deprecated, or it changes the pricing or it, you know, it gets acquired, then your journalism story is not gonna work anymore. So I have seen really good uptake among local news rooms and even big ones to use things like Protomaps just because it makes sense for the requirements. Working on Protomaps as an open source project for five years [00:12:49] Jeremy: How long have you been working on Protomaps and the parts that it's made up of such as PMTiles? [00:12:58] Brandon: I've been working on it for about five years, maybe a little more than that. It's sort of my pandemic era project. 
But the PMTiles part, which is really the heart of it only came in about halfway. Why not make a SaaS? [00:13:13] Brandon: So honestly, like when I first started it, I thought it was gonna be another SaaS and then I looked at it and looked at what the environment was around it. And I'm like, uh, so I don't really think I wanna do that. [00:13:24] Jeremy: When, when you say you looked at the environment around it what do you mean? Why did you decide not to make it a SaaS? [00:13:31] Brandon: Because there already is a lot of SaaS out there. And I think the opportunity of making something that is unique in terms of those use cases, like I mentioned like newsrooms, was clear. Like it was clear that there was some other solution, that could be built that would fit these needs better while if it was a SaaS, there are plenty of those out there. And I don't necessarily think that they're well differentiated. A lot of them all use OpenStreetMap data. And it seems like they mainly compete on price. It's like who can build the best three column pricing model. And then once you do that, you need to build like billing and metrics and authentication and like those problems don't really interest me. So I think, although I acknowledge sort of the indie hacker ethos now is to build a SaaS product with a monthly subscription, that's something I very much chose not to do, even though it is for sure like the best way to build a business. [00:14:29] Jeremy: Yeah, I mean, I think a lot of people can appreciate that perspective because it's, it's almost like we have SaaS overload, right? Where you have so many little bills for your project where you're like, another $5 a month, another $10 a month, or if you're a business, right? Those, you add a bunch of zeros and at some point it's just how many of these are we gonna stack on here? [00:14:53] Brandon: Yeah. And honestly. So I really think like as programmers, we're not really like great at choosing how to spend money like a $10 SaaS. 
That's like nothing. You know? So I can go to Starbucks and I can buy a pumpkin spice latte, and that's like $10 basically now, right? And it's like I'm able to make that consumer choice in like an instant just to spend money on that. But then if you're like, oh, like spend $10 on a SaaS that somebody put a lot of work into, then you're like, oh, that's too expensive. I could just do it myself. So I'm someone that also subscribes to a lot of SaaS products. And I think for a lot of things it's a great fit. Many open source SaaS projects are not easy to self host [00:15:37] Brandon: But there's always this tension between an open source project that you might be able to run yourself and a SaaS. And I think a lot of projects are at different parts of the spectrum. But for Protomaps, it's very much like I'm trying to move maps to being something that is so easy to run yourself that anyone can do it. [00:16:00] Jeremy: Yeah, and I think you can really see it with, there's a few SaaS projects that are successful and they're open source, but then you go to look at the self-hosting instructions and it's either really difficult to find, or you find it and then the instructions maybe don't work, or it's really complicated. So I think doing the opposite with Protomaps. As a user, I'm sure we're all appreciative, but I wonder in terms of trying to make money, if that's difficult. [00:16:30] Brandon: No, for sure. It is not like a good way to make money because I think like the ideal situation for an open source project that is open that wants to make money is the product itself is fundamentally complicated to where people are scared to run it themselves. Like a good example I can think of is like Supabase. Supabase is sort of like a platform as a service based on Postgres. And if you wanted to run it yourself, well you need to run Postgres and you need to handle backups and authentication and logging, and that stuff all needs to work and be production ready.
So I think a lot of people, like they don't trust themselves to run database backups correctly. 'cause if you get it wrong once, then you're kind of screwed. So I think that fundamental aspect of the product, like a database is something that is very, very ripe for being a SaaS while still being open source because it's fundamentally hard to run. Another one I can think of is like tailscale, which is, like a VPN that works end to end. That's something where, you know, it has this networking complexity where a lot of developers don't wanna deal with that. So they'd happily pay, for tailscale as a service. There is a lot of products or open source projects that eventually end up just changing to becoming like a hosted service. Businesses going from open source to closed or restricted licenses [00:17:58] Brandon: But then in that situation why would they keep it open source, right? Like, if it's easy to run yourself well, doesn't that sort of cannibalize their business model? And I think that's really the tension overall in these open source companies. So you saw it happen to things like Elasticsearch to things like Terraform where they eventually change the license to one that makes it difficult for other companies to compete with them. [00:18:23] Jeremy: Yeah, I mean there's been a number of cases like that. I mean, specifically within the mapping community, one I can think of was Mapbox's. They have Mapbox gl. Which was a JavaScript client to visualize maps and they moved from, I forget which license they picked, but they moved to a much more restrictive license. I wonder what your thoughts are on something that releases as open source, but then becomes something maybe a little more muddy. [00:18:55] Brandon: Yeah, I think it totally makes sense because if you look at their business and their funding, it seems like for Mapbox, I haven't used it in a while, but my understanding is like a lot of their business now is car companies and doing in dash navigation. 
And that is probably way better of a business than trying to serve like people making maps of toilets. And I think sort of the beauty of it is that, so Mapbox, the story is they had a JavaScript renderer called Mapbox GL JS. And they changed that to a source available license a couple years ago. And there's a fork of it that I'm sort of involved in called MapLibre GL. But I think the cool part is Mapbox paid employees for years, probably millions of dollars in total to work on this thing and just gave it away for free. Right? So everyone can benefit from that work they did. It's not like that code went away, like once they changed the license. Well, the old version has been forked. It's going its own way now. It's quite different than the new version of Mapbox, but I think it's extremely generous that they're able to pay people for years, you know, like a competitive salary and just give that away. [00:20:10] Jeremy: Yeah, so we should maybe look at it as, it was a gift while it was open source, and they've given it to the community and they're continuing on their own path, but at least the community running MapLibre, they can run with it, right? It's not like it just disappeared. [00:20:29] Brandon: Yeah, exactly. And that is something that I use for Protomaps quite extensively. Like it's the primary way of showing maps on the web and I've been trying to like work on some enhancements to it to have like better internationalization for places like South Asia, where languages might not show correctly. So I think it is being taken in a new direction. And I think like sort of the combination of Protomaps and MapLibre, it addresses a lot of use cases, like I mentioned earlier with like these like hobby projects, indie projects that are almost certainly not interesting to someone like Mapbox or Google as a business. But I'm happy to support as a small business myself.
Financially supporting open source work (GitHub sponsors, closed source, contracts) [00:21:12] Jeremy: In my previous interview with Tom, one of the main things he mentioned was that creating a mapping business is incredibly difficult, and he said he probably wouldn't do it again. So in your case, you're building Protomaps, which you've admitted is easy to self-host. So there's not a whole lot of incentive for people to pay you. How is that working out for you? How are you supporting yourself? [00:21:40] Brandon: There's a couple of strategies that I've tried and oftentimes failed at. Just to go down the list, so I do have GitHub sponsors so I do have a hosted version of Protomaps you can use if you don't want to bother copying a big file around. But the way I do the billing for that is through GitHub sponsors. If you wanted to use this thing I provide, then just be a sponsor. And that definitely pays for itself, like the cost of running it. And that's great. GitHub sponsors is so easy to set up. It just removes you having to deal with Stripe or something. 'cause a lot of people, their credit card information is already in GitHub. GitHub sponsors I think is awesome if you want to like cover costs for a project. But I think very few people are able to make that work. A thing that's like a salary job level. It's sort of like Twitch streaming, you know, there's a handful of people that are full-time streamers and then you look down the list on Twitch and it's like a lot of people that have like 10 viewers. But some of the other things I've tried, I actually started out, publishing the base map as a closed source thing, where I would sell sort of like a data package instead of being a SaaS, I'd be like, here's a one-time download, of the premium data and you can buy it. And quite a few people bought it I just priced it at like $500 for this thing. And I thought that was an interesting experiment. 
The main reason it's interesting is because the people that it attracts to you in terms of like, they're curious about your products, are all people willing to pay money. While if you start out everything being open source, then the people that are gonna be try to do it are only the people that want to get something for free. So what I discovered is actually like once you transition that thing from closed source to open source, a lot of the people that used to pay you money will still keep paying you money because like, it wasn't necessarily that that closed source thing was why they wanted to pay. They just valued that thought you've put into it your expertise, for example. So I think that is one thing, that I tried at the beginning was just start out, closed source proprietary, then make it open source. That's interesting to people. Like if you release something as open source, if you go the other way, like people are really mad if you start out with something open source and then later on you're like, oh, it's some other license. Then people are like that's so rotten. But I think doing it the other way, I think is quite valuable in terms of being able to find an audience. [00:24:29] Jeremy: And when you said it was closed source and paid to open source, do you still sell those map exports? [00:24:39] Brandon: I don't right now. It's something that I might do in the future, you know, like have small customizations of the data that are available, uh, for a fee. still like the core OpenStreetMap based map that's like a hundred gigs you can just download. And that'll always just be like a free download just because that's already out there. All the source code to build it is open source. So even if I said, oh, you have to pay for it, then someone else can just do it right? So there's no real reason like to make that like some sort of like paywall thing. 
But I think like overall if the project is gonna survive in the long term it's important that I'd ideally like to be able to like grow like a team like have a small group of people that can dedicate the time to growing the project in the long term. But I'm still like trying to figure that out right now. [00:25:34] Jeremy: And when you mentioned that when you went from closed to open and people were still paying you, you don't sell a product anymore. What were they paying for? [00:25:45] Brandon: So I have some contracts with companies basically, like if they need a feature or they need a customization in this way then I am very open to those. And I sort of set it up to make it clear from the beginning that this is not just a free thing on GitHub, this is something that you could pay for if you need help with it, if you need support, if you wanted it. I'm also a little cagey about the word support because I think like it sounds a little bit too wishy-washy. Pretty much like if you need access to the developers of an open source project, I think that's something that businesses are willing to pay for. And I think like making that clear to potential users is a challenge. But I think that is one way that you might be able to make like a living out of open source. [00:26:35] Jeremy: And I think you said you'd been working on it for about five years. Has that mostly been full time? [00:26:42] Brandon: It's been on and off. it's sort of my pandemic era project. But I've spent a lot of time, most of my time working on the open source project at this point. So I have done some things that were more just like I'm doing a customization or like a private deployment for some client. But that's been a minority of the time. Yeah. [00:27:03] Jeremy: It's still impressive to have an open source project that is easy to self-host and yet is still able to support you working on it full time. 
I think a lot of people might make the assumption that there's nothing to sell if something is, is easy to use. But this sort of sounds like a counterpoint to that. [00:27:25] Brandon: I think I'd like it to be. So when you come back to the point of like, it being easy to self-host. Well, so again, like I think about it as like a primitive of the web. Like for example, if you wanted to start a business today as like hosted CSS files, you know, like where you upload your CSS and then you get developers to pay you a monthly subscription for how many times they fetched a CSS file. Well, I think most developers would be like, that's stupid because it's just an open specification, you just upload a static file. And really my goal is to make Protomaps the same way where it's obvious that there's not really some sort of lock-in or some sort of secret sauce in the server that does this thing. How PMTiles works and building a primitive of the web [00:28:16] Brandon: If you look at video for example, like a lot of the tech for how Protomaps and PMTiles works is based on parts of the HTTP spec that were made for video. And 20 years ago, if you wanted to host a video on the web, you had to have like a real player license or flash. So you had to go license some server software from real media or from macromedia so you could stream video to a browser plugin. But now in HTML you can just embed a video file. And no one's like, oh well I need to go pay for my video serving license. I mean, there is such a thing, like YouTube doesn't really use that for DRM reasons, but people just have the assumption that video is like a primitive on the web. So if we're able to make maps sort of that same way like a primitive on the web then there isn't really some obvious business or licensing model behind how that works. Just because it's a thing and it helps a lot of people do their jobs and people are happy using it. So why bother? 
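The part of the HTTP spec Brandon is referring to here is byte-range requests: a client asks for just a slice of a file with a `Range` header, and a server that supports ranges answers 206 Partial Content with a `Content-Range` header describing the slice. A minimal sketch of that bookkeeping in TypeScript (the URL and byte numbers below are made up for illustration, not taken from the PMTiles format):

```typescript
// Build the Range header for a slice of `length` bytes starting at `offset`.
// HTTP byte ranges are inclusive on both ends: bytes=0-16383 is 16384 bytes.
function rangeHeader(offset: number, length: number): string {
  return `bytes=${offset}-${offset + length - 1}`;
}

interface ContentRange {
  start: number;
  end: number;
  total: number;
}

// Parse the Content-Range header of a 206 response,
// e.g. "bytes 0-16383/134217728" -> { start: 0, end: 16383, total: 134217728 }.
function parseContentRange(header: string): ContentRange {
  const m = /^bytes (\d+)-(\d+)\/(\d+)$/.exec(header);
  if (m === null) throw new Error(`unexpected Content-Range: ${header}`);
  return { start: Number(m[1]), end: Number(m[2]), total: Number(m[3]) };
}

// A map client reading a tile archive would issue something like:
//   fetch("https://example.com/world.pmtiles", {
//     headers: { Range: rangeHeader(0, 16384) }, // an initial slice, size illustrative
//   });
// and the server returns only those bytes, not the whole multi-gigabyte file.
```

Any static host or object store that honors `Range` headers can serve a map this way, which is why no dedicated map server process is needed.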
[00:29:26] Jeremy: You mentioned that it uses tech that was used for streaming video. What tech specifically is it? [00:29:34] Brandon: So it is byte range serving. So when you open a video file on the web, let's say it's like a 100 megabyte video, you don't have to download the entire video before it starts playing. It streams parts out of the file based on, like, what frames... I mean, it's based on the frames in the video. So it can start streaming immediately because it's organized in a way where the first few frames are at the beginning. And what PMTiles really is, is it's just like a video, but in space instead of time. So it's organized in a way where the zoomed out views are at the beginning and the most zoomed in views are at the end. So when you're panning or zooming in the map, all you're really doing is fetching byte ranges out of that file the same way as a video. But it's organized in this tiled way on a space-filling curve. It's a little bit complicated how it works internally, and I think it's kind of cool, but that's sort of an implementation detail. [00:30:35] Jeremy: And to the person deploying it, it just looks like a single file. [00:30:40] Brandon: Exactly, in the same way like an mp3 audio file is or like a JSON file is. [00:30:47] Jeremy: So with a video, I can sort of see how, as someone seeks through the video, they start at the beginning and then they go to the middle if they wanna see the middle. For a map, as somebody scrolls around the map, are you seeking all over the file, or does the way it's structured have a little less chaos? [00:31:09] Brandon: It's structured. And that's kind of the main technical challenge behind building PMTiles: you have to be sort of clever so you're not spraying the reads everywhere. So it uses something called a Hilbert curve, which is a mathematical concept of a space-filling curve, where it's one continuous curve that essentially lets you break 2D space into 1D space.
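Byte-range serving is plain HTTP: a client asks any static file host for a slice of a file with a `Range: bytes=start-end` header and gets back a `206 Partial Content` response containing only those bytes. As a minimal sketch of that contract (the in-memory "archive" and the helper function are illustrative stand-ins, not PMTiles code):

```python
# A minimal sketch of HTTP byte-range semantics, the mechanism PMTiles
# relies on. The "archive" here is an in-memory stand-in; a real client
# would send the same `Range: bytes=start-end` header to a static host.

def serve_range(data: bytes, range_header: str) -> tuple[int, bytes]:
    """Emulate a server answering `Range: bytes=start-end` (inclusive)."""
    unit, _, spec = range_header.partition("=")
    assert unit == "bytes"
    start_s, _, end_s = spec.partition("-")
    start, end = int(start_s), int(end_s)
    if start >= len(data):
        return 416, b""            # 416 Range Not Satisfiable
    body = data[start:end + 1]     # HTTP byte ranges are inclusive
    return 206, body               # 206 Partial Content

archive = bytes(range(256)) * 4    # stand-in for a 1 KiB "tile archive"

# A map viewer fetching one tile only pulls the bytes it needs:
status, chunk = serve_range(archive, "bytes=256-263")
# status == 206 and chunk is just those 8 bytes, not the whole file
```

Because this is standard HTTP, any server or bucket that handles static files correctly can serve a PMTiles archive with no map-specific software.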
So if you've seen some maps of IP space, it uses this crazy-looking curve that hits all the points in one continuous line. And that's the same concept behind PMTiles: if you're looking at one part of the world, you're sort of guaranteed that all of those parts you're looking at are quite close to each other and the data you have to transfer is quite minimal, compared to if you just had it at random. [00:32:02] Jeremy: How big do the files get? If I have a PMTiles of the entire world, what kind of size am I looking at? [00:32:10] Brandon: Right now, the default one I distribute is 128 gigabytes, so it's quite sizable, although you can slice parts out of it remotely. So if you just wanted California, or just wanted LA, or just wanted only a couple of zoom levels, like from zero to 10 instead of zero to 15, there is a command line tool that's also called PMTiles that lets you do that. Issues with CDNs and range queries [00:32:35] Jeremy: And when you're working with files of this size, I mean, let's say I am working with a CDN in front of my application. I'm not typically accustomed to hosting something that's that large and something where you're seeking all over the file. Is that ever an issue, or is that something that's just taken care of by the browser and by the hosts? [00:32:58] Brandon: That is an issue actually, so a lot of CDNs don't deal with it correctly. And my recommendation is there is a kind of proxy server, or like a serverless proxy thing, that I wrote that runs on Cloudflare Workers or on Docker that lets you proxy those range requests into a normal URL, and then that is, like, a hundred percent CDN compatible. So I would say a lot of the big commercial installations of this thing use that, because it makes more practical sense. It's also faster. But the idea is that this solution sort of scales up and scales down.
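The locality argument can be seen in miniature with the textbook Hilbert-curve mapping from a grid cell (x, y) to a 1D index. This is the generic algorithm, not Protomaps' actual tile-indexing code: neighboring cells in 2D land close together in 1D, so one view of the map touches a compact run of the file.

```python
# Classic Hilbert-curve mapping from (x, y) on an n-by-n grid (n a power
# of two) to a 1D index d. Shown only to illustrate the locality that
# PMTiles exploits; not its real implementation.

def xy2d(n: int, x: int, y: int) -> int:
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the curve stays continuous.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# On a 2x2 grid the curve visits (0,0) -> (0,1) -> (1,1) -> (1,0):
order = [xy2d(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]]
# order == [0, 1, 2, 3]: each step in space is one step along the file.
```

Contrast this with row-major order, where the tile below you can be an entire row's worth of bytes away; on the Hilbert curve, a zoomed-in view maps to a short, mostly contiguous span of byte ranges.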
If you wanted to host just your city in like a 10 megabyte file, well, you can just put that into GitHub Pages and you don't have to worry about it. If you want to have a global map for your website that serves a ton of traffic, then you probably want a little bit more sophisticated of a solution. It still does not require you to run a Linux server, but it might require you to use Lambda, or Lambda in conjunction with a CDN. [00:34:09] Jeremy: Yeah. And that sort of ties into what you were saying at the beginning, where if you can host on something like Cloudflare Workers or Lambda, there's less time you have to spend keeping these things running. [00:34:26] Brandon: Yeah, exactly. And I think also the Lambda or Cloudflare Workers solution is not perfect. It's not as perfect as S3 or as just static files, but in my experience, it still is better at building something that lasts on the time span of years than being like, I have a server that is on this Ubuntu version, and in four years there's all these security patches that are not being applied. So it's still sort of serverless, although not totally vendor neutral like S3. Customizing the map [00:35:03] Jeremy: We've mostly been talking about how you host the map itself, but for someone who's not familiar with these kinds of tools, how would they be customizing the map? [00:35:15] Brandon: For customizing the map, there is front end style customization and there's also data customization. So for the front end, if you wanted to change the water from one shade of blue to another shade of blue, there is a TypeScript API where you can customize it almost like a text editor color scheme. So if you're able to name a bunch of colors, well, you can customize the map in that way; you can change the fonts. And that's all done using MapLibre GL, with a TypeScript API on top of that. For customizing the data, all the pipeline to generate this data from OpenStreetMap is open source.
There is a Java program using a library called Planetiler, which is awesome, which is this super fast multi-core way of building map tiles. And right now there aren't really great hooks to customize what data goes into that. But that's something that I do wanna work on. And finally, because the data comes from OpenStreetMap, if you notice data that's missing or you wanted to correct data in OSM, then you can go into osm.org. You can get involved in contributing the data to OSM, and the Protomaps build is daily. So if you make a change, then within 24 hours you should see the new base map have that change. And of course, for OSM, your improvements would go into every OSM-based project that is ingesting that data. So it's not a Protomaps-specific thing. It's this big shared data source, almost like Wikipedia. OpenStreetMap is a dataset and not a map [00:37:01] Jeremy: I think you were involved with OpenStreetMap to some extent. Can you speak a little bit to that for people who aren't familiar, what OpenStreetMap is? [00:37:11] Brandon: Right. So I've been using OSM as sort of like a tools developer for over a decade now. And one of the number one questions I get from developers about what is Protomaps is, why wouldn't I just use OpenStreetMap? What's the distinction between Protomaps and OpenStreetMap? And it's sort of this funny thing, because even though OSM has map in the name, it's not really a map, in that it's mostly a data set and not a map. It does have a map that you can see, that you can pan around, when you go to the website, but the way that thing they show you on the website is built is not really that easily reproducible. It involves a lot of C++ software you have to run. But OpenStreetMap itself, the heart of it, is almost like a big XML file that has all the data in the map, globally. And it has tagged features, for example. So you can go in and edit that. It has a web front end to change the data.
It does not directly translate into making a map, actually. Protomaps decides what shows at each zoom level [00:38:24] Brandon: So a lot of the pipeline, that Java program I mentioned for building this basemap for Protomaps, is doing things like: you have to choose what data you show when you zoom out. You can't show all the data. For example, when you're zoomed out and you're looking at all of a state like Colorado, you don't see all the Chipotles when you're zoomed all the way out. That'd be weird, right? So you have to make some sort of decision in logic that says this data only shows up at this zoom level. And that's really what is the challenge in optimizing the size of that for the Protomaps map project. [00:39:03] Jeremy: Oh, so those decisions of what to show at different zoom levels, those are decisions made by you when you're creating the PMTiles file with Protomaps. [00:39:14] Brandon: Exactly. It's part of the basemap's build pipeline. And those are honestly very subjective decisions. Who really decides, when you're zoomed out, should this hospital show up or should this museum show up? Nowadays in Google, I think it shows you ads. Like if someone pays for their car repair shop to show up when you're zoomed out like that, that gets surfaced. But because there is no advertising auction in Protomaps, that doesn't happen, obviously. So we have to sort of make some reasonable choice. A lot of that right now in Protomaps actually comes from another open source project called Mapzen. So Mapzen was a company that went out of business a couple years ago. They did a lot of this work in designing which data shows up at which zoom level and open sourced it. And then when they shut down, they transferred that code into the Linux Foundation. So it's this totally open source project that, again, sort of like Mapbox GL, has this awesome legacy in that this company funded it for years for smart people to work on it, and now it's just a free thing you can use.
So the logic in Protomaps is really based on Mapzen. [00:40:33] Jeremy: And so the visualization of all this... I think I understand what you mean when people say, oh, why not use OpenStreetMap, because it's not really clear, it's hard to tell: is this the tool that's visualizing the data? Is it the data itself? So in the case of using Protomaps, it sounds like Protomaps itself has all of the data from OpenStreetMap, and then it has made all the decisions for you in terms of what to show at different zoom levels and what things to have on the map at all. And then finally, you have to have a separate UI layer, and in this case, it sounds like the one that you recommend is the MapLibre library. [00:41:18] Brandon: Yeah, that's exactly right. For Protomaps, it has a portion or a subset of OSM data. It doesn't have all of it, just because there's too much. Like, there's data in there; people have mapped out different bushes, and I don't include that in Protomaps. If you wanted to go in and edit the Java code to add that, you can. But really what Protomaps is positioned at is sort of a solution for developers that want to use OSM data to make a map on their app or their website. Because OpenStreetMap itself is mostly a data set, it does not really go all the way to having an end-to-end solution. Financials and the idea of a project being complete [00:41:59] Jeremy: So I think it's great that somebody who wants to make a map, they have these tools available, whether it's from what was originally built by Mapbox, what's built by OpenStreetMap now, the work you're doing with Protomaps. But I wonder, one of the things that I talked about with Tom was he was saying he was trying to build this mapping business, and based on the financials of what was coming in, he was stressed, right? He was struggling a bit. And I wonder for you, you've been working on this open source project for five years.
Do you have similar stressors, or do you feel like, I could keep going how things are now and I feel comfortable? [00:42:46] Brandon: So I wouldn't say I'm a hundred percent in one bucket or the other. I'm still seeing it play out. One thing that I really respect in a lot of open source projects, which I'm not saying I'm gonna do for Protomaps, is the idea that a project is finished. I think that is amazing. If a software project can just be done, it's sort of like a painting or a novel: once you finish the last page, have it seen by the editor, and send it off to the press, you're done with the book. And I think one of the pains of software is so few of us can actually do that. And I don't know, obviously people will say, oh, the map is never finished. That's more true of OSM, but I think for Protomaps, one thing I'm thinking about is how to limit the scope to something that's quite narrow, to where we could be feature complete on the core things in the near-term timeframe. That means that it does not address a lot of things that people want. Like search: if you go to Google Maps and you search for a restaurant, you will get some hits. That's like a geocoding issue, and I've already decided that's totally outta scope for Protomaps. So, in terms of trying to think about the future of this, I'm mostly looking for ways to cut scope if possible. There are some things, like better tooling around being able to work with PMTiles, that are on the roadmap. But for me, I am still enjoying working on the project. It's definitely growing. So I can see on NPM downloads, I can see the growth curve of people using it, and that's really cool. So I like hearing about when people are using it for cool projects. So it seems to still be going okay for now. [00:44:44] Jeremy: Yeah, that's an interesting perspective about how you were talking about projects being done. Because I think when people look at GitHub projects and they go like, oh, the last commit was X months ago.
They go, oh well, this is dead, right? But maybe that's the wrong framing. Maybe you can get a project to a point where it's like, oh, it's because it doesn't need to be updated. [00:45:07] Brandon: Exactly, yeah. Like I used to do a lot of C++ programming, and the best part is when you see some LAPACK matrix math library from like 1995 that still works perfectly in C++ and you're like, this is awesome. This is the one I have to use. But if you're trying to use some React component library and it hasn't been updated in a year, you're like, oh, that's a problem. So again, I think there's some middle ground between those that I'm trying to find. I do like that for Protomaps, it's quite dependency-light in terms of the number of hard dependencies I have in software. But I do still feel like there is a lot of work to be done in terms of project scope that needs to have stuff added. You mostly only hear about problems instead of people's wins [00:45:54] Jeremy: Having run it for this long, do you have any thoughts on running an open source project in general? On dealing with issues or managing what to work on, things like that? [00:46:07] Brandon: Yeah. So I have a lot. I think one thing people point out a lot is that, especially because I don't have a direct relationship with a lot of the people using it, a lot of times I don't even know that they're using it. Someone sent me a message saying, hey, have you seen flickr.com, like the photo site? And I'm like, no. And I went to flickr.com/map and it has Protomaps for it. And I'm like, I had no idea. But that's cool; if they're able to use Protomaps for this giant photo sharing site, that's awesome. But that also means I don't really hear about when people use it successfully, because you just don't know. I guess they NPM installed it and it works perfectly and you never hear about it. You only hear about people's negative experiences.
You only hear about people that come and open GitHub issues saying this is totally broken, and why doesn't this thing exist? And I'm like, well, it's because there's an infinite amount of things that I want to do, but I have a finite amount of time and I just haven't gone into that yet. And that's honestly a lot of the things and people are like when is this thing gonna be done? So that's, that's honestly part of why I don't have a public roadmap because I want to avoid that sort of bickering about it. I would say that's one of my biggest frustrations with running an open source project is how it's self-selected to only hear the negative experiences with it. Be careful what PRs you accept [00:47:32] Brandon: 'cause you don't hear about those times where it works. I'd say another thing is it's changed my perspective on contributing to open source because I think when I was younger or before I had become a maintainer I would open a pull request on a project unprompted that has a hundred lines and I'd be like, Hey, just merge this thing. But I didn't realize when I was younger well if I just merge it and I disappear, then the maintainer is stuck with what I did forever. You know if I add some feature then that person that maintains the project has to do that indefinitely. And I think that's very asymmetrical and it's changed my perspective a lot on accepting open source contributions. I wanna have it be open to anyone to contribute. But there is some amount of back and forth where it's almost like the default answer for should I accept a PR is no by default because you're the one maintaining it. And do you understand the shape of that solution completely to where you're going to support it for years because the person that's contributing it is not bound to those same obligations that you are. 
And I think that's also one of the things where I have a lot of trepidation around open source is I used to think of it as a lot more bazaar-like in terms of anyone can just throw their thing in. But then that creates a lot of problems for the people who are expected out of social obligation to continue this thing indefinitely. [00:49:23] Jeremy: Yeah, I can totally see why that causes burnout with a lot of open source maintainers, because you probably to some extent maybe even feel some guilt right? You're like, well, somebody took the time to make this. But then like you said you have to spend a lot of time trying to figure out is this something I wanna maintain long term? And one wrong move and it's like, well, it's in here now. [00:49:53] Brandon: Exactly. To me, I think that is a very common failure mode for open source projects is they're too liberal in the things they accept. And that's a lot of why I was talking about how that choice of what features show up on the map was inherited from the MapZen projects. If I didn't have that then somebody could come in and say hey, you know, I want to show power lines on the map. And they open a PR for power lines and now everybody who's using Protomaps when they're like zoomed out they see power lines are like I didn't want that. So I think that's part of why a lot of open source projects eventually evolve into a plugin system is because there is this demand as the project grows for more and more features. But there is a limit in the maintainers. It's like the demand for features is exponential while the maintainer amount of time and effort is linear. Plugin systems might reduce need for PRs [00:50:56] Brandon: So maybe the solution to smash that exponential down to quadratic maybe is to add a plugin system. But I think that is one of the biggest tensions that only became obvious to me after working on this for a couple of years. [00:51:14] Jeremy: Is that something you're considering doing now? 
[00:51:18] Brandon: Is the plugin system? Yeah. I think for the data customization, I eventually wanted to have some sort of programmatic API where you could declare a config file that says, I want ski routes. It totally makes sense. The power lines example is maybe a little bit obscure, but for example, take a skiing app, and you want to be able to show ski slopes when you're zoomed out; well, you're not gonna be able to get that from Mapbox or from Google, because they have a one-size-fits-all map that's not specialized to skiing or to golfing or to outdoors. But in theory, you could do this with Protomaps if you changed the Java code to show data at different zoom levels. And that is to me what makes the most sense for a plugin system, and also makes the most product sense, because it enables a lot of things you cannot do with the one-size-fits-all map. [00:52:20] Jeremy: It might also increase the complexity of the implementation though, right? [00:52:25] Brandon: Yeah, exactly. So that's really where a lot of the terrifying thoughts come in, which is, once you create this config file surface area, well, what does that look like? Is that JSON? Is that TOML? Is that some weird... everything eventually evolves into some scripting language, right? Where you have logic inside of your templates. And I honestly do not really know what that looks like right now. That feels like something in the medium-term roadmap. [00:52:58] Jeremy: Yeah, and then in terms of bug reports or issues, now it's not just your code, it's this exponential combination of whatever people put into these config files. [00:53:09] Brandon: Exactly. Yeah. So again, I really respect the projects that have done this well, or that have done plugins well. I'm trying to think of some. I think Obsidian has plugins, for example. And that seems to be one of the few solutions to try and satisfy the infinite desire for features with the limited amount of maintainer time.
Time split between code vs triage vs talking to users [00:53:36] Jeremy: How would you say your time is split between working on the code versus issue and PR triage? [00:53:43] Brandon: Oh, it varies really. I think working on the code is a minority of it. I think something that I actually enjoy is talking to people, talking to users, getting feedback on it. I go to quite a few conferences to talk to developers or people that are interested, and figure out how to refine the message, how to make it clearer to people what this is for. And I would say maybe a plurality of my time is spent dealing with non-technical things that are neither code nor GitHub issues. One thing I've been trying to do recently is talk to people that are not really in the mapping space. For example, people that work for newspapers: a lot of them are front end developers, and if you ask them to run a Linux server, they're like, I have no idea. But that really is one of the best target audiences for Protomaps. So I'd say a lot of the reality of running an open source project is a lot like a business: it has all the same challenges as a business, in terms of you have to figure out what is the thing you're offering. You have to deal with people using it. You have to deal with feedback; you have to deal with managing emails and stuff. I don't think the payoff is anywhere near running a business or a startup that's backed by VC money, but it's definitely not the case that if you just want to code, you should start an open source project, because I think a lot of the work for an open source project has nothing to do with just writing the code. It is, in my opinion as someone having done a VC-backed business before, a lot more similar to running a tech company than just putting some code on GitHub.
Running a startup vs open source project [00:55:43] Jeremy: Well, since you've done both, at a high level, what did you like about running the company versus maintaining the open source project? [00:55:52] Brandon: So I have done some venture capital accelerator programs before, and I think there is an element of hype and energy that you get from that that is self-perpetuating. Your co-founder is gung-ho on, like, yeah, we're gonna do this thing. And your investors are like, you guys are geniuses, you guys are gonna make a killing doing this thing. And the way it's framed is sort of obvious to everyone: there's a much more traditional set of motivations behind that, that people understand. While it's definitely not the case for running an open source project. Sometimes you just wake up and you're like, what the hell is this thing for? It is this thing you spend a lot of time on. You don't even know who's using it. The people that use it and make a bunch of money off of it, they know nothing about it. And you know, it's just like, cool. And then you only hear from people that are complaining about it. And I think that's honestly discouraging, compared to the clearer energy and clearer motivation and vision behind how most people think about a company. But what I like about the open source project is just the lack of those constraints, you know? Where you have a mandate that you need to have this many customers that are paying by this amount of time, there's that sort of pressure on delivering a business result, instead of just making something that you're proud of that's simple to use and has an elegant design. I think that's really a difference in motivation as well. Having control [00:57:50] Jeremy: Do you feel like you have more control? Like you mentioned how you've decided, I'm not gonna make a public roadmap. I'm the sole developer. I get to decide what goes in. What doesn't.
Do you feel like you have more control in your current position than you did running the startup? [00:58:10] Brandon: Definitely for sure. Like that agency is what I value the most. It is possible to go too far. Like, so I'm very wary of the BDFL title, which I think is how a lot of open source projects succeed. But I think there is some element of for a project to succeed there has to be somebody that makes those decisions. Sometimes those decisions will be wrong and then hopefully they can be rectified. But I think going back to what I was talking about with scope, I think the overall vision and the scope of the project is something that I am very opinionated about in that it should do these things. It shouldn't do these things. It should be easy to use for this audience. Is it gonna be appealing to this other audience? I don't know. And I think that is really one of the most important parts of that leadership role, is having the power to decide we're doing this, we're not doing this. I would hope other developers would be able to get on board if they're able to make good use of the project, if they use it for their company, if they use it for their business, if they just think the project is cool. So there are other contributors at this point and I want to get more involved. But I think being able to make those decisions to what I believe is going to be the best project is something that is very special about open source, that isn't necessarily true about running like a SaaS business. [00:59:50] Jeremy: I think that's a good spot to end it on, so if people want to learn more about Protomaps or they wanna see what you're up to, where should they head? [01:00:00] Brandon: So you can go to Protomaps.com, GitHub, or you can find me or Protomaps on bluesky or Mastodon. [01:00:09] Jeremy: All right, Brandon, thank you so much for chatting today. [01:00:12] Brandon: Great. Thank you very much.
Pratul Kalia from Tramline joins Jamon, Robin, and Mazen to talk about mobile release automation, scaling to millions of users, and why skipping the tooling step might cost you later. A must-listen for devs navigating production launches. Show Notes: Tramline, Reldex, Apdex, Tramline's synchronized builds. Connect With Us! Pratul Kalia: @prxtl, Jamon Holmgren: @jamonholmgren, Robin Heinze: @robinheinze, Mazen Chami: @mazenchami, React Native Radio: @ReactNativeRdio. This episode is brought to you by Infinite Red! Infinite Red is an expert React Native consultancy located in the USA. With nearly a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.
The above title does not do Dan Swift justice. Dan also has his own podcast, a successful YouTube channel, and he has released seven music albums. Talk about being unstoppable! I met Dan when I appeared as a guest on his podcast, Time We Discuss, and I knew he would contribute a fascinating story here. Dan grew up with an interest in music. For a time he thought he wanted to write music for video games. Along the way he left that idea behind, and after graduating from college he began designing websites. He has made that into his full-time career. As he grew as a website designer, and later as a supervisor for a school system coordinating and creating the school sites, Dan took an interest in accessibility of the web. We talk quite a bit about that during our time together. His observations are fascinating and right on where web access for persons with disabilities is concerned. We also talk about Dan's podcast, including some stories of guests and what inspires Dan from his interviews. I hope you enjoy this episode as much as I did. About the Guest: Originally wanting to write music for video games or become an audio engineer, Dan Swift graduated from a small liberal arts college with a degree in Music Composition (Bachelor of Arts) and Music Recording Technology (Bachelor of Music). Dan went on to release seven EP albums between 2003 and 2024. Most recently, "Parallels" dropped on Leap Day, 2024. Dan has always had a passion for shaking up genres between EPs, writing classical, electronic, and modern rock music. While creating music has always been a passion, Dan took a more traditional professional path as a web developer. While on this path, Dan had a lot of experience with accessibility standards as they relate to the web, and he values accessibility and equity for everyone, both inside and outside the digital workspace.
Having received his MBA during COVID, Dan went on to a leadership position where he continues to make a difference leading a team of tech-savvy web professionals. In early 2024, Dan created a podcast and YouTube channel called "Time We Discuss," which focuses on career exploration and discovery. The channel and podcast are meant for anyone that is feeling lost professionally and unsure of what is out there for them. Dan feels that it is important for people to discover their professional passion, whatever it is that lights them up on the inside, and chase it. So many people are unfulfilled in their careers, yet it doesn't have to be this way. When not working, Dan enjoys spending time with his wife and three kids. They are a very active family, often going to various extracurricular events over the years, including flag football, soccer, gymnastics, and school concerts. Dan's wife is very active with several nonprofit organizations, including those for the betterment of children and those addressing homelessness. Dan enjoys playing the piano, listening to podcasts, and listening to music. Dan is very naturally curious and is a slave to a train of never-ending thoughts. Ways to connect with Dan: Time We Discuss on YouTube Time We Discuss on Spotify Time We Discuss on Twitter/X Time We Discuss on Instagram Time We Discuss on Bluesky Time We Discuss Website Dan Swift Music Website About the Host: Michael Hingson is a New York Times best-selling author, international lecturer, and Chief Vision Officer for accessiBe. Michael, blind since birth, survived the 9/11 attacks with the help of his guide dog Roselle. This story is the subject of his best-selling book, Thunder Dog. Michael gives over 100 presentations around the world each year, speaking to influential groups such as Exxon Mobil, AT&T, Federal Express, Scripps College, Rutgers University, Children's Hospital, and the American Red Cross, just to name a few.
He is Ambassador for the National Braille Literacy Campaign for the National Federation of the Blind and also serves as Ambassador for the American Humane Association's 2012 Hero Dog Awards. https://michaelhingson.com https://www.facebook.com/michael.hingson.author.speaker/ https://twitter.com/mhingson https://www.youtube.com/user/mhingson https://www.linkedin.com/in/michaelhingson/ accessiBe Links https://accessibe.com/ https://www.youtube.com/c/accessiBe https://www.linkedin.com/company/accessibe/mycompany/ https://www.facebook.com/accessibe/ Thanks for listening! Thanks so much for listening to our podcast! If you enjoyed this episode and think that others could benefit from listening, please share it using the social media buttons on this page. Do you have some feedback or questions about this episode? Leave a comment in the section below! Subscribe to the podcast If you would like to get automatic updates of new podcast episodes, you can subscribe to the podcast on Apple Podcasts or Stitcher. You can subscribe in your favorite podcast app. You can also support our podcast through our tip jar https://tips.pinecast.com/jar/unstoppable-mindset . Leave us an Apple Podcasts review Ratings and reviews from our listeners are extremely valuable to us and greatly appreciated. They help our podcast rank higher on Apple Podcasts, which exposes our show to more awesome listeners like you. If you have a minute, please leave an honest review on Apple Podcasts. Transcription Notes: Michael Hingson ** 00:00 Access Cast and accessiBe Initiative presents Unstoppable Mindset. The podcast where inclusion, diversity and the unexpected meet. Hi, I'm Michael Hingson, Chief Vision Officer for accessiBe and the author of the number one New York Times bestselling book, Thunder dog, the story of a blind man, his guide dog and the triumph of trust. Thanks for joining me on my podcast as we explore our own blinding fears of inclusion unacceptance and our resistance to change. 
We will discover the idea that no matter the situation, or the people we encounter, our own fears and prejudices often are our strongest barriers to moving forward. The Unstoppable Mindset podcast is sponsored by accessiBe, that's a c c e s s i capital B e. Visit www.accessibe.com to learn how you can make your website accessible for persons with disabilities, and to help make the internet fully inclusive by the year 2025. Glad you dropped by, we're happy to meet you and to have you here with us. Michael Hingson ** 01:20 Well, hi everybody. Welcome once again, wherever you may be, to Unstoppable Mindset. I am your host, Mike Hingson. Sometimes I say Michael Hingson, and people have said, well, is it Mike or Michael? And the answer is, it doesn't really matter. It took a master's degree in physics and 10 years in sales for me to realize that if I said Mike Hingson on the phone, people kept calling me Mr. Kingston, and I couldn't figure out why, so I started saying Michael Hingson, and they got the Hingson part right, but it doesn't matter to me. So anyway, Mike Hingson or Michael Hingson, glad you're with us, wherever you are. And our guest today is Dan Swift, who has his own podcast, and it was actually through that podcast that we met, and I told him that I wouldn't be on his podcast unless he would be on Unstoppable Mindset. And here he is. Dan is a person who writes music, he's an engineer, he does a lot of work with web design and so on, and we're going to get into all that. So Dan, I want to welcome you to Unstoppable Mindset. We're really glad you're here. Dan Swift ** 02:25 Michael, it's a pleasure to be here. Thank you so much for inviting me. I'm super excited. Michael Hingson ** 02:30 Well, looking forward to getting to spend more time with you. We did yours, Time We Discuss, and now we get this one. So it's always kind of fun. And Dan is in Pennsylvania, so we're talking across the continent, which is fine.
It's amazing what we can do with electronics these days, it's not like the good old days of the covered wagon. What can I say? So, Dan, why don't you tell us a little bit about kind of the early Dan, growing up and all that. Dan Swift ** 02:57 Oh, geez. How far back to go? Michael Hingson ** 02:58 Oh, as far as you want to go. Dan Swift ** 03:02 Well, okay, so I am the youngest of five. Grew up just outside of Philadelphia. Being the youngest, you know, there are certain perks that go along with that. I got to experience things that my parents would have previously said no to with the older siblings. And you know how it is, with, you know, if you have more than one kid, technically, you get a little more relaxed as you have more. But then I also had the other benefit of, you know, hearing the expression, there are young ears in the room, I will tell you later. So I kind of got some of that too. But I grew up outside of Philadelphia, had a passion for music pretty early on. I was never good at any sports. Tried a number of things. And when I landed on music, I thought, you know, this is something that I can do, I seem to have a natural talent for it. I tried playing the piano when I was maybe eight or nine years old. That didn't pan out. Moved on to the trumpet when I was nine or 10. Eventually ended up picking up guitar, bass guitar, double bass, revisited piano later in life, but that's the musical side of things. Also, when I was young, you know, I had a passion for role playing games. Dungeons and Dragons was really big when I was a teenager, so I was super excited for that. Yeah, those memories kind of formed me, or kind of shaped me, into the person that I am today. I'm very light hearted, very easy going, and I just try to enjoy life.
Michael Hingson ** 04:30 I played some computer games when computers came along and I started fiddling with them. The games I usually played were text based games. I've never really played Dungeons and Dragons and some of those. And I'm sure that there are accessible versions of some of that, but I remember playing games like Adventure. You remember? Have you heard of Adventure? I have, yeah. So that was fun. Infocom, well, they had Zork, which was really the same as Adventure, but they had a whole bunch of games. And those are fun. And I think all of those games, I know a lot of adults would probably say kids spend too much time on some of them, but some of these games, like the text based games, I thought really were very good at expanding one's mind, and they made you think, which is really what was important to me. Dan Swift ** 05:21 Yeah, I completely agree with that too. Because you'd be put in these situations where, you know, you're trying to solve some kind of puzzle, and you're trying to think, okay, well, that didn't work, or that didn't work, and you try all these different things, then you decide to leave and come back to it, and you realize later, like, you didn't have something that you needed to progress forward, or something like that. But it really gets the brain going, trying to come up with these creative solutions to progress the game forward. Michael Hingson ** 05:43 Yeah, and the creative people who made them in the first place? I don't know where they spent their whole time, that they had nothing to do but to create these games. But hey, it worked. It sure did. Well, so you went off to college. Where'd you go? Dan Swift ** 06:02 Sure, I went to a small liberal arts college, Lebanon Valley College in Pennsylvania. It's near Hershey.
It was weird in that the entire school was about half the size of my entire high school. So that was very, very weird. And then you talk to these other people, and it's like, my high school was, you know, very large by comparison. But for me, it was like, well, high school, that's what I knew. But yeah, I went to Lebanon Valley College near Hershey. I was a double major. I studied music composition and music recording. Michael Hingson ** 06:35 Okay. And, oh, I've got to go back and ask before we continue that. So what were some of the real perks you got as a kid that your older siblings didn't get? Dan Swift ** 06:45 Oh, geez, okay. I mean, Michael Hingson ** 06:49 Couldn't resist. Dan Swift ** 06:51 Yeah, probably some of the more cliche things. I probably got to spend the night at a friend's house earlier than my oldest brother, for instance. I know my parents were a little more concerned about finances, so I know my oldest brother didn't get a chance to go away to college. He did community college instead. And then with my sister it was a very similar thing. And then once we got about halfway down, you know, me and my two other brothers, we all had the opportunity to go away to college. So I think that was definitely one of the perks. If I was the oldest, I probably wouldn't have had that opportunity with my family. Michael Hingson ** 07:24 Got it. Well, so you went off and you got a bachelor's in music composition and music recording. So that brought you to what you were interested in, in part, which was the engineering aspect of it. But that certainly gave you a pretty well rounded education. Why those two? Why composition and recording?
Dan Swift ** 07:43 So if we talk about the music first, at that time, so this is like the late 90s, early 2000s, any kind of digital music that was out there really was MIDI based, and anyone that was around that time and paying attention, it had that very distinctive MIDI kind of sound to it. So there wasn't a whole lot going on with MIDI, I'm sorry, with music, as far as how great it sounded. Or I shouldn't say how great it sounded; the instruments that are triggered by MIDI, they didn't sound all that great. But around that time, there was this game that came out, Final Fantasy VII, and I remember hearing the music for that, and it was all electronic, and I was just blown away by how fantastic it sounded. And around that time, I thought, you know, it'd be really cool to get into writing music for video games. And that was something I really kind of toyed with. So that was kind of in the back of my head. But also, at the time, I was in a band, like a rock band, and I thought, you know, I'm going to school, they have this opportunity to work as a music engineer, which is something I really wanted to do at the time. And I thought, free studio time. My band will be here. This will be awesome. And it wasn't until I got there that I discovered that they also had the music composition program. I was only there maybe a week or two, and once I discovered that, I was like, well, this is gonna be great, you know, I'll learn to write music, I can write for video games, I'll get engineering to go with it. This is gonna be fantastic. Michael Hingson ** 09:07 Speaking of electronic music, did you ever see a science fiction movie called The Forbidden Planet? I did not. Oh, its music, it's not really music in the sense of what we'd call music, but it's all electronic. You gotta find it. I'm sure you can find it somewhere. It's called The Forbidden Planet. Walter Pidgeon is in it.
But the music and the sounds fit the movie, although it's all electronic, and electronic sounding. Pretty interesting. Dan Swift ** 09:37 Now, is that from, I know, like in the 50s, 60s, there was a lot of experimenting. Michael Hingson ** 09:45 Okay, yeah, yeah. But again, it fit the movie, which was the important part. So it certainly wasn't music like John Williams today, and in the 80s and all that. But again, for the movie, it fit very well, which is kind of cool. Dan Swift ** 10:02 Yeah, I'll definitely have to check that out. I remember when I was in school, we talked about that avant-garde kind of style of the 50s, 60s. And there was a lot of weird stuff going on with electronic music. So I'm very curious to check this out, yeah, yeah. Michael Hingson ** 10:14 You have to let me know what you find, what you think about it, when you get a chance to watch it. Absolutely. Or actually, I may have a copy. If I do, I'll put it in a Dropbox folder and send you a link. Fantastic. So you graduated. Now, when did you graduate? Dan Swift ** 10:32 Sure, so I graduated in 2003. Michael Hingson ** 10:35 Okay, so you graduated, and then what did you do? Dan Swift ** 10:41 So, backing up maybe six to twelve months prior to that, I decided I didn't want to write music for video games. I also did not want to work in a recording studio. And the reason for this was, for music, it was something I really, really enjoyed, and I didn't want to be put in a position where I had to produce music on demand. I didn't want to do that. I didn't want to lose my hobby, lose my passion, in that way. So I decided that was out. And then also, when it came to working in a studio, if I wanted to be the engineer that I really wanted to be, I would have to be in a place where the music scene was really happening.
So I'd have to be in, like, Philadelphia or Los Angeles or Nashville, deep in the city, or something like that. And I do not like the cities. I don't feel comfortable in the city. So I was like, that's not really for me either. I could work in, like, a suburb studio. But I was like, no, not for me. So when I graduated college, I ended up doing freelance web work. I was introduced by a mutual friend to a person that was looking for a new web designer and developer. They lost their person, and they were looking for someone to take over with that. And at the time, I had a little bit of experience doing that, from when I was in high school, kind of picked it up on the side, just kind of like as a hobby. But I was like, ah, I'll give this a shot. So I started actually doing that freelance for a number of years after graduation. I also worked other jobs that were, like, kind of nowhere, like dead end kind of jobs. I did customer service work for a little bit. I was a teacher with the American Red Cross for a little bit. A little bit of this and that, just trying to find my way. But at the same time, I was doing freelance stuff, and nothing related to music and nothing related to technology. Michael Hingson ** 12:29 Well, so you learned HTML coding and all that other stuff that goes along with all that, I gather. Dan Swift ** 12:35 I sure did, I sure did. At the time, CSS was just getting popular, yeah, so that. And then I learned JavaScript a little bit. And, you know, I had a very healthy attitude when it came to accepting new clients and projects. I always tried to learn something new. Anytime a new request came in, it was like, okay, well, I already know how to do this by doing it this way, but how can I make this better?
And that was really the way that I propelled myself forward in the digital realm, I should say, when it comes to development or design. Michael Hingson ** 13:05 Okay, so you ended up really seriously going into website development and so on. Dan Swift ** 13:15 I did. So I continued doing freelance. And then about five years after I graduated, I started working as an audio visual technician, and also was doing computer tech stuff as part of the role as well. And while I was there, I ended up developing some web applications for myself to use that I could use to interact with, like, our projectors and stuff like that, because they were all on the network, so I could interact with them using my, wait for it, iPod Touch. There you go. So that was, you know, I kind of started to blend those two together. I was really interested in the web at the time, you know, because I was still doing the freelance. I really wanted to move forward and kind of find a full time position doing that. So I ended up pursuing that more and just trying to refine those skills. And it wasn't until about five years later that I ended up working as a full time web developer, and then kind of moved forward from there. Michael Hingson ** 14:09 iPod Touch, what memories? And there are probably bunches of people who don't even know what that is today. Dan Swift ** 14:16 That is so true, and at the time that was cutting edge technology. Michael Hingson ** 14:21 Yeah, it was not accessible. So I didn't get to own one, because it was later than that that Steve Jobs was finally kind of pushed, with the threat of a lawsuit, into making things accessible. And then they did make the iPhone, the iPod, the Mac and so on, and iTunes U and other things like that, accessible. And of course, what Steve Jobs did, what Apple did, which is what Microsoft eventually sort of has done as well, is that he built accessibility into the operating system. So anybody who has an Apple device today.
They actually have a device that can be made accessible by simply turning on the accessibility mode. Of course, if you're going to turn it on, you better learn how to use it, because the gestures are different. It took a while, but that did happen. But by that time, you know, I had other things going on, and so I never did get an iPod and wasn't able to make it work, but that's okay. But it's like the CD has gone away, and the iPod has gone away, and so many things, and DVDs have gone away. Dan Swift ** 15:31 Yes, so true. So true. You know, just as soon as we start to get used to them, they're gone. Michael Hingson ** 15:35 I think there is, well, maybe it's close, there was a Blockbuster opened up in Oregon. But again, Blockbuster Video, another one, and I think somebody's trying to bring them back. But I do see that vinyl records are still being sold in various places by various people. Michael Buble just put out a new album, The Best of Buble, and it's available, among other things, on vinyl. So the old turntables, the old record players, and you can actually buy his album as a record and play it, which is kind of cool. Dan Swift ** 16:07 Yeah, they've been very big with marketing, too. It's been kind of a marketing, I don't want to say gimmick, but in that realm, you kind of, like, hey, you know, this is also available on vinyl, and you try to get the people that are, like, the audiophiles to really check it out. I never really took to vinyl personally, but I know plenty of people that have sworn by it. Michael Hingson ** 16:25 Well, I've heard a number of people say that the audio actually is better on vinyl than a typical MP3 or other similar file formats. Dan Swift ** 16:35 Yep, yep.
I had a friend growing up, and actually, I shouldn't say growing up, so I was already, like, in college or post college, but a buddy of mine, Craig, he was all about vinyl, and he had the nice amplifier, and, I think, even a certain kind of needle that you would get for the record player. And you know, you'd have to sit in the sweet spot to really enjoy it, and I respect that. But, um, for me, it was like, I didn't hear that much of a difference between a CD and vinyl. I never did have the opportunity to A/B test them, though. But now I will say, comparing a CD to, like, an MP3 file, for instance, even a high quality MP3 file, I can tell the difference on that, sure. I would never, you know, I'd use the MP3s for convenience. But if I were to have it my way, man, I'd have the uncompressed audio, no doubt about it, yeah. Michael Hingson ** 17:27 Waveforms, yep, yep, yeah. Obviously that's going to give you the real quality. Of course, it takes a lot more memory, but nevertheless, if you've got the space, it really makes a lot of sense to do, because MP3 isn't going to be nearly as high a level of quality. Dan Swift ** 17:43 Absolutely, absolutely true. And the way I rationalize it to myself, it's like, well, if I'm going to be, though, in the car or probably walking around and listening to music, I'm going to be getting all kinds of sounds from outside anyway; it kind of offsets the poor quality of the MP3, to justify it. Michael Hingson ** 17:56 That's true. Well, you know, an MP3 is convenient if you want to put a bunch of stuff on a, well, on a memory card and be able to play it all, because if you have uncompressed audio, it does take a lot more space, and you can't put as much on a card, or you've got to get a much bigger card. And now we're getting pretty good sized memory cards. But still, the reality is that for most purposes, not all, MP3 will suffice.
Dan Swift ** 18:26 That is true. That is true. And I think, too, the next battle is going to be MP3 or streaming. Michael Hingson ** 18:33 Yeah, yeah, that's going to be fun, isn't it? Yeah? Boy, what a world. Well, so one of the things I noticed in reading your bio and so on is that you got involved to a great degree in dealing with accessibility on the web. Tell me about that. Dan Swift ** 18:55 Absolutely, Michael. So I have very strong opinions about accessibility. And this really comes back to, you know, I was at my job, and I was only there as a full time developer, I wasn't there all that long, maybe a year, maybe two, and my supervisor came over to me and she said, you know, we want to start to make things more accessible. And this is, like, 10 or 12 years ago at this point. And I was like, okay, you know, and I did my little bit of research, and there wasn't a whole lot going on at the time. I don't think WCAG was a thing back then. It may have been. I can't remember if 508 was a thing at the time. It was. Okay, yeah. So I was doing my research, and, you know, you learn about the alt tags, and it's like, okay, well, we're doing that, okay. Then you learn about forms, and it's like, okay, well, they need to have labels, okay. But the turning point was this, Michael: we had a person on staff that was blind, and I was put in touch with this person, and I asked them to review, like, different web applications we made, or forms or web pages. And one day, I can't remember if he volunteered or if I asked, but essentially the request was, can this person come into our physical space and review stuff for us in person? And that experience was life changing for me. Just watching him navigate our different web pages or web applications or forms, and seeing how he could go through it, see what was a problem, what was not a problem, was just an incredible experience.
And I said this before, when given the opportunity to talk about this, I say to other developers and designers: if you ever have even the slightest opportunity, if you meet someone who is blind, let me rephrase that, if you have the opportunity to watch someone who is blind navigate through the web, take that opportunity. It is just an amazing, amazing experience, and you draw so much from it as a developer or designer. So, very strong opinions about it. I'm all about inclusivity and making things equal for everyone on the web, and that was just my introductory experience about a dozen years ago. Michael Hingson ** 21:07 And so what have you done with it all since? Dan Swift ** 21:11 Sure, so with our website, we went from having about a million success criteria failures, and we've gotten it all the way down to, I think my last check was maybe about 10,000, so it was a huge, huge change. It's hard to get everything, because as content changes and new pages come online, it's hard to keep everything 100% accessible, but we know what to look for. You know, we're looking for the right contrast. We're looking for, you know, the alt tags. We're looking for hierarchy with the headers. We're making sure our forms are accessible. We're making sure there aren't any keyboard traps, you know, things that most people, most web visitors, don't even think about, or developers even think about, until, you know, you need to think about them. Michael Hingson ** 22:00 Well, and other things as well, such as with other kinds of disabilities. If you're a person with epilepsy, for example, you don't want to go to a website and find blinking elements, or at least you need to have a way to turn them off, yeah. Dan Swift ** 22:13 Or audio that starts automatically, or videos that start automatically, yeah, yeah.
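[Editor's note: The checklist Dan rattles off (contrast, alt tags, heading hierarchy, form labels, keyboard traps) maps onto WCAG success criteria, and the simplest of those checks can be automated. As a loose illustration only, and not a tool Dan's team mentions, a few lines of Python using the standard library's html.parser can flag img tags that are missing an alt attribute:]

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every img tag that has no alt attribute.

    An empty alt="" is allowed on purpose: WCAG treats it as the
    correct way to mark a purely decorative image.
    """

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="sales chart">'
             '<img src="b.png"><img src="c.png" alt=""></p>')
print(checker.missing)  # only b.png lacks an alt attribute
```

[A real audit covers far more than this, and, as Dan notes later in the conversation, automated reports still need a human to confirm them.]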
Michael Hingson ** 22:19 So many different things. Or video that starts automatically, and there's music, but there's no audio, so a blind person doesn't even know what the video is. Yes, which happens all too often. But the reality is that with the Americans with Disabilities Act, it's been interesting, because some lawyers have tried to fight the courts and say, well, but the ADA came out long before the internet, so we didn't know anything about the internet, so it doesn't apply. And finally, the Department of Justice is taking some stands to say, yes, it does, because the internet is a place of business. But it's going to have to be codified, I think, to really bring it home. But some courts have sided with that argument and said, well, yeah, the ADA is too old, so it doesn't matter. And so we still see so many challenges with the whole idea of access. And people listening to this podcast know that, among other things, I work with a company called accessiBe. Are you familiar with them? I am. Yep, yeah. And so that's been an interesting challenge. But what makes accessiBe interesting is that it has an artificially intelligent widget that can monitor a website, and at the low end of costs, it's like $490 a year. And it may not pick up everything that a body needs, but it will do a lot. And going back to what you said earlier, as websites change, as they evolve, because people are doing things on their website, which they should be doing; if you've got a static website, you never do anything with it, and that's not going to do you very much good. But if it's changing constantly, the widget, at least, can look at it and make a lot of the changes to keep the website accessible. The other part of it is that it can tell you what it can't do, which is cool.
They do monitor the stuff for you, you know. Like, on our site, we have something that runs every night and it gives us a report every day. But then there are things that it doesn't always check, or it might get a false positive, because it sees that, like, you know, this element has a particular color background and the text is a particular color as well, but there's, you know, maybe a gradient image that lies between them, or an image that lies between them, so it's actually okay, even though the tool says it's not, or something like that. So, yeah, those automated tools, but you've got to also look at it. You know, a human has to look at those as well. Michael Hingson ** 24:52 Yeah, it's a challenge. But the thing that I think is important with, well, say, accessiBe as an example, is that I think every web developer should use accessiBe. And the reason I think that is not that accessiBe will necessarily do a perfect job with the access widget, but what it will do is give you something that is constantly monitored. And even if it only makes about 50% of the website more usable, because there are complex graphics and other things that it can't do, the reality is, why work harder than you have to? And if accessiBe can do a lot of the work for you without you having to do it, it doesn't mean that you need to charge less or you need to do things any different, other than the fact that you save a lot of time on doing part of it because the widget does it for you. Dan Swift ** 25:47 Absolutely, absolutely. That's a really, really good point too, having that tool in your tool belt, you know. Michael Hingson ** 25:55 Yeah, yeah.
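[Editor's note: The contrast false positive Dan describes arises because checkers compute the WCAG contrast ratio from the two declared colors; they cannot see a gradient image layered between the text and its background. The math itself is simple. The sketch below is just the published WCAG 2.x formula, not code from the nightly tool Dan mentions:]

```python
def _channel(c8):
    """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a color written like '#1a2b3c'."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 up to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
print(contrast_ratio("#777777", "#ffffff") >= 4.5)     # False: just under the AA bar
```

[WCAG 2.x asks for at least 4.5:1 for normal body text and 3:1 for large text, which is why a tool that only reads the stylesheet can flag a page that actually looks fine over its background image.]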
And the reality is that accessiBe, in and of itself, does a lot, and it really makes websites a lot better than they otherwise were. And some people say, well, we've gone to websites and accessiBe doesn't really seem to make a difference on the site. Maybe not. But even if your website is pretty good up front and you use accessiBe, it's that time that you change something that you don't notice, and suddenly accessiBe fixes it, that makes it better. It's an interesting discussion all the way around, but to deny the reality of what an AI oriented system can do is really just putting your head in the sand and not being realistic about life as we go forward, I think. Dan Swift ** 26:52 That is so true. That is so true, and there's so many implications with AI and where it's going to go and what it will be able to do. You know, it's just in its infancy, and the possibilities of what the future is going to be like are just going to be very, very interesting. Michael Hingson ** 27:05 I interviewed someone, well, I can't say interviewed, because it's a conversation, I had a conversation with someone earlier on Unstoppable Mindset, and he said something very interesting. He's a coach, and specifically, he does a lot of work with AI, and he had one customer that he really encouraged to start using ChatGPT. And what this customer did: he called his senior staff into a meeting one day, and he said, okay, I want you to take the rest of the day and just work with ChatGPT and create ideas that will enhance our business, and then let's get together tomorrow to discuss them. And he did that because he wanted people to realize the value that already exists in using some of this technology. Well, these people came back with incredible ideas, because they took the time to focus on them, and again, they interacted with ChatGPT.
So it was a symbiotic, that's probably the wrong word, but synergistic kind of relationship, where they and the AI system worked together and created, apparently, what became really clever ideas that enhanced this customer's business. And the guy, when he first started working with this coach, was totally down on AI, but after that day of interaction with his staff, he recognized the value of it. And I think the really important key of AI is: AI will not replace anyone. And that's what this gentleman said to me. He said, AI won't do it. People may replace other people, which really means they're not using AI properly, because if they were, when they find that they can use artificial intelligence to do the job that someone else is doing, you don't get rid of that person, you find something else for them to do. And the conversation that we had was about truck drivers who are involved in transporting freight from one place to another. If you get to the point where you have an autonomous vehicle that can really do that, you still keep a driver behind the wheel, but that driver is now doing other things for the company while the AI system does the driving, once it gets dependable enough to do that. So he said, there's no reason for AI to eliminate any job at all, and it won't; it's people that do it. Which I think is a very clever and appropriate response. Dan Swift ** 29:29 And I completely agree with that. You know, you think of other technologies that are out there and how they disrupted different industries. And the one example I like to use is the traffic light, you know. And I wonder, and I have no way of knowing this, I haven't researched this at all, but I wonder if there was any kind of pushback when they started putting in traffic lights. Because at that point in time, maybe you didn't have people directing traffic or something like that.
Or maybe that was the advent of the stop sign; it took away the jobs of people that were directing traffic, or something like that. Maybe there was some kind of uproar over that, maybe not, I don't know. But I like to think that with things like that, you know, it disrupts the industry, but then people move on, and there are other opportunities for them, and it progresses. It makes society progress forward. Michael Hingson ** 30:06 And one would note that we still do use school crossing guards at a lot of schools. Dan Swift ** 30:11 That is so true, that is true. Yeah, yeah. And especially, too, like, talking about idea generation, I was talking to Ginger, I forgot her last name, but she's the president of Pinstripe Marketing, and she was saying that her team sometimes does the same thing, that they use ChatGPT for idea generation. And, let's see, Ashley, I think Ashley Mason was her name, from Dasha Social, the same thing: they use ChatGPT for idea generation. Not necessarily for creating the content, but for idea generation, and with the ideas it comes up with, it can save you a lot of time. Michael Hingson ** 30:48 Well, it can. And you know, I've heard over the last year plus how a lot of school teachers are very concerned that kids will just go off and get ChatGPT to write their papers. And every time I started hearing that, I made the comment, why not let it do that? You're not thinking about it in the right way. If a kid goes off and just uses ChatGPT to write their paper, they do that and they turn it in to you. The question is, then, what are you, as the teacher, going to do? And I submit that what the teachers ought to do is, when they assign a paper and the class all turns in their papers, then what you do is you take one period, and you give each student a minute to come up and defend their paper without having it in front of them. You'll find out very quickly who knows what.
And I think it's a potentially great teaching tool. Dan Swift ** 31:48 That is fascinating. That perspective is awesome. I love that. Speaker 1 ** 31:52 Well, it makes sense. Dan Swift ** 31:55 It certainly does. It certainly does. And that made me think of this too. You know, there's a lot of pushback from artists about how their art is being used by AI to generate new art, essentially. And musicians are saying the same thing: they're taking our stuff, it's getting fed into ChatGPT or whatever, and they're using it to train these different models. And I read this article, I don't even know where it was, but it was probably a couple months ago at this point, and the person made this comparison. The person said, you know, it's really no different than a person learning how to paint in school by studying other people's art. It's the same idea, just at a much, much accelerated pace. And I thought, you know what, that's kind of an interesting perspective. Michael Hingson ** 32:45 It is. I do agree that we need to be concerned that the human element is important. And there are a lot of things that people are already doing to misuse some of these AI tools, but we already have the dark web. We've had that for a while, too. I've never been to the dark web. I don't know how to get to it. That's fine. I don't need to go to the dark web. Besides that, I'll bet it's not accessible anyway. But we've had the dark web, and people have accepted the fact that it's there, and there are people who monitor it and all that. But the reality is, people are going to misuse things. There are going to be people who will misuse them, and, yeah, we have to be clever enough to try to ferret that out. But the fact of the matter is, AI offers so much already.
One of the things that I heard, oh, gosh, I don't know whether it was this year or late last year, was that, using artificial intelligence, Pfizer, or Moderna, I guess, is the other one, and other organizations actually created the COVID vaccines that we have in only a couple of days. If people had to do it alone, it would have taken them years that we didn't have. And the reality is that, using artificial intelligence, it was only a few days and they had the beginnings of those solutions, because they created a really neat application and put the system to work. Why wouldn't we want to do that? Dan Swift ** 34:23 I completely agree. I completely agree. And that's, again, how you move society forward. You know, it's similar to the idea of testing medications on animals, for instance. I love animals, you know, dogs, bunnies, the whole gamut, but I understand the importance of it. Do we test on them, or do we test on people, or do we not test? You've got to weigh out the pros and cons, and there are definitely those with AI as well. Michael Hingson ** 34:56 Well, I agree, with animals and people. Now, as far as I'm concerned, we ought to be doing tests on politicians. You know, they're not people anyway. I think when you decide to become a politician, you take a special pill that nobody seems to be able to prove, but they take dumb pills, so they're all there. But anyway, I'm with Mark Twain: Congress is that grand old benevolent asylum for the helpless. So I'm an equal opportunity abuser, which is why we don't do politics on Unstoppable Mindset. We could have a lot of fun with it, I'm sure; it would be great to talk about artificial intelligence and politicians.
But the reality is that it's really something that brings so much opportunity, and it's going to continue to do that. Every day, as we see advances in what AI is doing, we will continue to see advances in what is open for us to utilize it to accomplish, which is cool. Dan Swift ** 36:04 I completely agree. Completely agree. Michael Hingson ** 36:06 Yeah, so it'll be fun to see how it goes. So do you work for a company now that makes websites? Or what is the company that you work for? Dan Swift ** 36:16 Sure. So I'm still in the education space; I'm at a state school managing a team of web professionals. Michael Hingson ** 36:23 Okay, well, that's cool. So you keep the school sites, and all the things that go along with them, up and all that. Dan Swift ** 36:31 That is correct. And we have lots of fun challenges when we start to integrate with third parties, because we've got to make sure they're accessible too. And sometimes there's dialogue that goes back and forth that people aren't happy with, but it's my job; that's one of the things that we make sure happens. Especially since, I'm sure you've been following this, there's the Department of Justice ruling back in April: I think it's anyone that's receiving state funding, they have to follow WCAG 2.1 double-A compliance by April of '26 if you are a certain size, and my institution falls into that category. So we need to make sure that we're on the right path. Michael Hingson ** 37:06 Well, and the reality is that has been around since 2010, but it took the DOJ 12 years to finally come up with rules and regulations to implement Section 508. Yep. But it's high time they did, and they do need to do it for the rest of the internet, and that's coming, but people are just being slow. And for me personally, I think it's just amazing that it's taking so long.
It's not like you have to redesign a box or go off and retool hardware. This is all code. Why should it be that difficult to do? But people throw roadblocks in your way, and so it becomes tough. Dan Swift ** 37:47 Yeah, it's interesting, too. I remember reading this article, oh, gosh, this is probably about a dozen years ago, and it said that the original web was 100% accessible, that it was just text on a page, pretty much, and you could do very, very simple layouts. And then it got more convoluted. People would start doing tables for layouts, and tables within tables within tables, and so on and so forth. The original web was completely accessible, and now, with all the interactions we do with client-side scripting and everything like that, it's just a mess. Michael Hingson ** 38:19 If you really want to hear an interesting thing: I like to look, and I've done it for a long time, long before accessiBe, I like to explore different sites and see how accessible they are. And one day I visited nsa.gov, the National Security Agency, which, of course, doesn't really exist, so I could tell you stories. But I went to nsa.gov, and I found that it was the most accessible website I had ever encountered. If you arrow down to a picture, for example, when you arrowed into it, suddenly you got on your screen reader a complete verbal description of what the picture was, and everything about that site was totally usable and totally accessible. I'd never seen a website that was so good. Contrast that with, and it's changed, I want to be upfront about it, Martha Stewart Living. The first time I went to that website, because I was selling products that Martha Stewart was interested in, I went to look at the website. It was totally inaccessible; the screen reader wouldn't talk at all.
Now, I've been to Martha Stewart's site since, and it's much more accessible, but I was just amazed that nsa.gov was so accessible. It was amazing, which I thought was really pretty cool, of all places. Dan Swift ** 39:41 You know, it's interesting. Before I started my YouTube channel and podcast, I actually thought about creating a channel and/or podcast about websites that are inaccessible, and I thought about calling companies out. And the more I thought about it, I was like, I don't know if I want to make that many people angry. I don't know if that's a good idea. Michael Hingson ** 39:58 I would suggest going the other way, and maybe, you know, maybe we can work together on it. But I would rather feature websites that are accessible and tell the story of how they got there, how their people got there. I hear what you're saying about making people angry, so I would think, rather than doing that, feature the places that are accessible, and why they are, and their stories, and that might help motivate more people to make their websites accessible. What do you think about that as an idea? Dan Swift ** 40:28 I actually thought about that as well, and I was going back and forth between that and the negative side, because I thought, you know, bringing it to light, shedding light on it, might actually force them to make their site more accessible, whether they would or not, I don't know. But I definitely thought about those two sides. Michael Hingson ** 40:45 Yeah, it's a challenge all the way around. Well, what was the very first thing you did, the first experience that you ever had dealing with accessibility, that got you started down that road? Dan Swift ** 40:58 I think it was, like I said, when I worked with that blind person, when I first had that opportunity to see how he used the different web applications we had, the different web pages, and he was using a Mac.
So he was using VoiceOver, and he was using, I think it's called the rotor menu, or roto-something like that. Yeah, yep. So then after that happened, it was like, whoa, I need to get them back so I can learn to use this as well and do my own testing. So I asked the IT department. I said, hey guys, do you have any old MacBooks that I can use? It can be old; I just need it to test for accessibility on the web. They hooked me up with an old machine. You know, it wasn't super old, but it worked for me. It gave me an opportunity to do my testing, and then I kind of became the person in the department to do that. Everyone else, they didn't have the interest as much as I did. They recognized the importance of it, but they didn't have the same fire on the inside that I had, so I took that on. Now that I'm in a position of leadership, it's more about delegating that and making sure it still gets done. But I'm kind of the resident expert in our area, so I'm still the person that dives in a little bit, trying to make my team aware and do the things they need to do to make sure we're continuing to create accessible projects. Michael Hingson ** 42:20 You mentioned earlier the whole idea of third-party products and dealing with them. What do you do, and how do you deal with a company, when, let's say, you need to use somebody else's product for some of the things that the school system has to do, and you find it's not accessible? What do you do? Dan Swift ** 42:42 So a lot of times, what will happen, I shouldn't say a lot of times, it's not uncommon for a department to make a purchase from a third party, and this is strictly, I'm talking in the web space. They might make a purchase with a third party, and then they want us to integrate it. And this is a great example I had.
It was actually in the spring. They had essentially a widget that would be on their particular set of pages, and there was a pop-up that would appear. And don't get me started on pop-ups, because I've got very strong opinions about those. Me too. Like I said, growing up in the late '90s, early 2000s: very, very strong opinions about pop-ups. But I encountered this, and it wasn't accessible. And I'm glad that, in the position I'm in, I could say: this unit, you need to talk to the company, and they need to fix this, or I'm taking it down. And I'm glad that I had the backing from leadership, essentially, that I could make that claim and then do that, and the company ended up fixing it. So that was good. Another example: another department was getting ready to buy something. Actually, no, they had already purchased it, but they hadn't implemented it yet. The first example was already implemented; I discovered that one after the fact. So in the second example, they were getting ready to implement it, and they showed us another school that used it, also a pop-up. And I looked at it on the other school's site, and I said, this isn't accessible. We cannot use this. And they said, well, yes, it is. And I said, no, it isn't. And I explained to them, and I showed them how it was not accessible, and they ended up taking it back to their developers. Apparently there was a bug that they then fixed, and they made it accessible, and then we could implement it. So it's nice that I have the support from leadership, that if there is something that is inaccessible, I have the power to kind of wield my fist and take it off of our site. Michael Hingson ** 44:31 Do you ever find that when some of this comes up within the school system, departments push back? Or have they caught on and recognized the value of accessibility, so that they'll be supportive?
Dan Swift ** 44:45 I think the frustration with them becomes more of: we bought this tool, and we wish we had known this was an issue before we bought it. It's more of a feeling that we just wasted our time and money, possibly. But generally speaking, they do see the value of it, and they've recognized the importance of it. It's just more hoops everyone has to go through. Michael Hingson ** 45:05 Yeah, and as you mentioned, with pop-ups especially, it's a real challenge, because you could be on a website, and a lot of times a pop-up will come up and it messes up the website for people with screen readers and so on. And part of the problem is we don't even always find the place to close or take down the pop-up, which is really very frustrating. Dan Swift ** 45:30 Exactly, exactly. The tab index could be off, or you could still be on the page somewhere, and it doesn't allow you to get into it and remove it. And extra bonus points if they also have audio or a video playing inside of that. Michael Hingson ** 45:44 Yeah, it really does make life a big challenge, which is very, very frustrating all the way around. Yeah, pop-ups are definitely a big pain in the butt, and I know with accessibility, we're all very concerned about that, but still, pop-ups do occur.
And the neat thing about a product like accessiBe, and one of the reasons I really support it, is that it's scalable. As the people who develop the product at accessiBe improve it, those improvements filter down to everybody using the widget, which is really cool. And that's important, because with individual websites, where somebody has to code it in and keep monitoring it, as you pointed out, the problem is, if that's all you have, then you've got to keep paying people to monitor everything, to make sure everything stays accessible and coded properly. Whereas there are ways to take advantage of something like accessiBe, where you let it monitor the site, and as accessiBe learns, things improve for everyone. I've got some great examples where people contacted me because they had things like a shopping cart on a website that didn't work, and when accessiBe fixed it, because it turns out there was something that needed to be addressed, it got fixed for anybody using the product, which is really cool. Dan Swift ** 47:07 Yeah, that's really neat. I definitely appreciate things like that, where you essentially fix something for one person and it's fixed for everyone, or a new feature gets added for someone, or a group of people, for instance, and then everyone is able to benefit from that. That's really, really awesome. I love that type of stuff. Michael Hingson ** 47:22 Yeah, I think it's really cool. How has all this business with accessibility affected you in terms of your YouTube channel and podcasting? How do you bring that into the process? Dan Swift ** 47:37 That's a really, really good question. I am very proud to say that I take the time to create transcripts of all my recordings, and then I go through them and check them for accuracy, to make sure things are correct and nothing is incorrect.
So I make sure that those are there and available when the videos go live. Spotify creates them automatically for you. I don't know that I have the ability to modify them; I'm assuming I probably do, but honestly, I haven't checked into that. But so that's all accessible. When it comes to my web page, I make sure that all my images have the appropriate alt tags associated with them, that the descriptions are there so people understand what the pictures are. I don't have a whole lot of pictures; usually it's just the thumbnail for the videos, so just indicating what it is. And then I just try to be kind of text-heavy. I try to make sure that my links are not, you know, "click here," "learn more," stuff like that, and that they're not actual web addresses. I try to make sure that they're actually actionable, so when someone's using a screen reader and they go over a link, it actually is meaningful. And color contrast is another big one. I try to make sure my color contrast is meeting the appropriate level for WCAG 2.1 double-A. I can't remember what the actual contrast is, but there's a contrast checker for it, which is really, really helpful. Michael Hingson ** 49:00 Well, and the other part about it is, when somebody goes to your website, again, accessibility is different for different people. So when you're dealing with things like contrast or whatever, do people who come to the website have the ability to modify some of those settings, so that they get maybe a higher contrast, or change colors? Do they have that ability? Dan Swift ** 49:28 They do not have that ability. I remember looking into a tool a while ago, and actually, you know, at the school, we thought about developing a tool. It would be like a widget on the side where you could adjust different things like that.
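A note for readers: the contrast checker Dan mentions implements the WCAG 2.1 contrast-ratio formula, which double-A conformance sets at a minimum of 4.5:1 for normal-size text (3:1 for large text). As an illustrative sketch of what such a tool computes (this is not the specific checker from the episode), the ratio is derived from the relative luminance of the two colors:

```python
# Illustrative sketch of a WCAG 2.1 contrast check (not the tool from the episode).

def relative_luminance(hex_color):
    """Relative luminance per WCAG 2.1, from a hex color like '#1a2b3c'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def channel(c):
        # sRGB gamma expansion as defined by WCAG 2.1
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with the lighter color's luminance on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio("#000000", "#ffffff")
print(round(ratio, 1))   # black on white is the maximum possible ratio: 21.0
print(ratio >= 4.5)      # passes the double-A threshold for normal text: True
```

Real checkers layer more on top of this (alpha blending, large-text thresholds, whole-page scanning), but the luminance transform and the 4.5:1 cutoff above are straight from the WCAG 2.1 definitions.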
You could remove images, you could remove animation, you could change color contrast, that sort of thing, and it would be a very predefined kind of settings panel. But in my research, I found that a lot of times that causes other problems for people, and it kind of falls into the arena of "separate but equal," and there are a lot of issues with that right now in the accessibility space when it comes to the web. So, for instance, there was a company, I forget what the company name was, but one of the things that they did was create text-only versions of your pages. So you'd contract with them, they would scrape the content of your site, and they would create a text-only version of your pages. So if people were using a screen reader, they could just follow that link and then browse the text-only version. And there was litigation, and the company got sued, and the person suing was successful, because it was essentially creating a separate experience. Michael Hingson ** 50:34 And that's separate but not necessarily equal, which is the problem, because if you only get the text: pictures are put on websites, graphs are put on websites, all of those other kinds of materials are put on websites for reasons. And so what really needs to happen is that those other things need to be made accessible, which is doable, and the whole Web Content Accessibility Guidelines do offer the information as to how to do that and what to do. But it is important that that other information be made available, because otherwise it really is separate, but not totally equal at all. Dan Swift ** 51:11 That's absolutely true. Absolutely true. Yeah. Michael Hingson ** 51:15 So it is something to, you know, to look at. Well, you've been doing a podcast and so on for a while.
What are some challenges someone might face that you'd advise people about if they're going to create their own podcast or a really productive YouTube channel? Dan Swift ** 51:31 Be real with yourself about the amount of time you have to dedicate to it, because what I found is that it takes a lot more time than I originally anticipated. So I typically try to record one or two people a week. When I first started out, I was only recording one person, and usually I would record one day, edit the next day, do the web page stuff as I went; I could knock it out in like an hour or two. But I wasn't anticipating the social media stuff that goes with it, the search engine optimization that goes with it, the research that goes with it. So if I'm producing a video that's going to go on YouTube: what's hot at the moment? What are people actually searching for? What's going to grab people's attention? What kind of thumbnail do I have to create to grab someone's attention, where it's not clickbait, but it also represents what I'm actually talking to the person about, and is still interesting? So it's a lot of that research, a lot of that sort of thing. It just eats up a lot of time. When it comes to the transcripts, for instance, those were super easy; there are a number of services out there that create them automatically for you, and you just have to read through and make sure everything's okay. I know YouTube will do it as well. I found that YouTube isn't as good as some of the other services that are out there, but in a bind, you can at least rely on YouTube and then go and edit from that point. But yeah, time is definitely a big one.
I would say, if anyone is starting to do it, make sure you have some serious time to dedicate: several hours a week, probably a good four to ten hours a week is what I would estimate at the moment, if you're looking to produce a 30-minute segment once or twice a week. Michael Hingson ** 53:11 Yeah. One of the things I've been hearing about videos is that the trend is clearly not to have long videos, but only 30-second videos, and to make them vertical as opposed to horizontal. And anything over 30 seconds is not good, which seems to me to really not challenge people to have enough content to make something relevant, because you can't do everything in 30 seconds. Dan Swift ** 53:41 Exactly. And what I found too, and this was a little bit of a learning curve for me: with YouTube Shorts, they have to be a minute or less. Now they're actually in the process of changing that to three minutes or less; I do not have that access yet. But what I'm finding, Michael, is that, and this is a great example: I was interviewing a comedian in New York City, Meredith Dietz, awesome, awesome episode. I was talking to her about becoming a comedian, and I made about four different Shorts from her video, and I was doing a new one each week to kind of promote it. And the videos, for me, I was getting anywhere between maybe 315 100 views on a Short. For me, that was awesome; for other people, that might be nothing, but for me, that was awesome. But what I found was that the people that watch the Shorts aren't necessarily the same people that watch the long-form videos. So I might get subscribers from people that watch the Shorts, but then they're not actually watching the videos.
And in the end, that kind of hurts your channel, because it's telling the YouTube, I'm going to use air quotes, "algorithm" that my subscribers aren't interested in my content, and it ends up hurting me more. So anyone that's trying to play that game, be aware of that: you can get more subscribers through Shorts, but if you're not converting them, it's going to hurt you. Michael Hingson ** 55:05 I can accept three minutes, but 30 seconds just seems to be really strange. And I was asked once to produce a demonstration of accessiBe on a website. They said you've got to do it in 30 seconds, or no more than a minute, but preferably 30 seconds. Well, you can't do that if, in part, you're also trying to explain what a screen reader is and everything else. The reality is, there's got to be some tolerance, and I think the potential is there to do that. But it isn't all about eyesight, which is, of course, the real issue from my perspective, anyway. Dan Swift ** 55:41 Yeah, I completely agree. I think what YouTube is trying to do, and I believe they're getting this from TikTok, I think TikTok has up to three minutes, actually, there might be ten minutes now that I think about it, but I think they're trying to follow the trend, and it's like, let's make videos slightly longer and see how that goes. So I'll be very curious to see how that all pans out. Michael Hingson ** 55:58 Well, and I think that makes sense. I think there's some value in that, but 30 seconds is not enough time to get real content, and if people dumb down to that point, then that's pretty scary. So I'm glad to hear that the trend seems to be going a little bit longer, which is a good thing, and pretty important. Dan Swift ** 56:21 Yeah, I completely agree.
Because the trend right now is, you know, people want stuff immediately, and if you don't catch them in ten seconds, they're swiping on to something else, which is very challenging, at least for me and what I do. Michael Hingson ** 56:32 Who's the most inspiring guest that you've ever had on your podcast? Dan Swift ** 56:37 Michael, this is a good one. This is a good one. So, the video for Ashley Mason. She is a social media marketer; she created a social medi
Sia Karamalegos is a freelance web developer and web performance engineer helping ecommerce brands turn faster load times into real revenue. As a Google Developer Expert in Web Technologies and a former engineer on Shopify's performance team, Sia brings a unique blend of hands-on experience and deep technical insight to the challenge of building faster, more performant online stores.

With a passion for developer education and community building, Sia organizes the Eleventy Meetup, Durham Social Hack Night, and a new global web performance meetup, connecting engineers around the world to share real-world tactics and tools. She's also a frequent international speaker and writer, known for making complex topics like Core Web Vitals and JavaScript performance approachable and actionable.

In 2024, Sia launched ThemeVitals, a tool that benchmarks Shopify themes using real user data, not lab simulations, to uncover which themes actually perform well across the devices your customers use. It's a mission rooted in impact: helping merchants and theme developers make smarter, faster decisions that drive conversion and long-term growth.

Through her work, Sia is redefining how ecommerce teams think about performance, showing that real user data, smart defaults, and community-driven tooling can transform the way we build the web.

In This Conversation We Discuss:
[00:40] Intro
[01:00] Focusing on real-world site speed fixes
[02:39] Improving performance metrics for merchants
[04:22] Translating Google metrics for merchants
[04:56] Understanding how Core Web Vitals work
[07:34] Balancing traffic vs technical optimization
[10:36] Shifting focus from speed to sales
[13:16] Balancing performance with product experience
[15:26] Highlighting global device performance gaps
[16:54] Uploading giant images the wrong way
[21:04] Auditing your tech stack regularly
[21:53] Comparing Shopify themes with real data
[24:11] Balancing features vs speed in theme choice
[26:00] Avoiding minimalist themes that lack function
[28:08] Encouraging feedback for future improvements

Resources:
Subscribe to Honest Ecommerce on Youtube
Explore real-world Core Web Vitals performance data for popular Shopify themes: themevitals.com/
Web Developer & Performance Engineer: sia.codes/
Follow Sia Karamalegos: linkedin.com/in/karamalegos

If you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!
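The real-user benchmarking described above rests on how Core Web Vitals are assessed: each metric is judged at the 75th percentile of field page loads, with Largest Contentful Paint rated "good" at 2.5 s or less and "poor" above 4 s. A minimal sketch of that classification (illustrative only; ThemeVitals' actual methodology may differ):

```python
import math

# Illustrative sketch (not ThemeVitals' code): rate a page's Largest Contentful
# Paint from real-user samples at the 75th percentile, as Core Web Vitals does.

def p75(samples):
    """75th percentile via the nearest-rank method on sorted samples."""
    ordered = sorted(samples)
    idx = math.ceil(0.75 * len(ordered)) - 1
    return ordered[idx]

def classify_lcp(samples):
    """LCP thresholds from the Core Web Vitals definition, in seconds."""
    value = p75(samples)
    if value <= 2.5:
        return value, "good"
    if value <= 4.0:
        return value, "needs improvement"
    return value, "poor"

# Hypothetical field samples (seconds) from many real page loads.
samples = [1.9, 2.1, 2.4, 2.6, 3.1, 2.2, 2.0, 2.3]
print(classify_lcp(samples))  # → (2.4, 'good')
```

Assessing at the 75th percentile rather than the mean is deliberate: it means most visits, including those on slower devices and networks, are at least as fast as the reported value.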
News includes Phoenix now including DaisyUI which has sparked mixed reactions, Erlang/OTP 28.0-rc2 release introducing priority process messages, the EEF Security Working Group's roadmap called Aegis, a new LiveViewPortal library for embedding LiveView pages in any website, upcoming improvements in Elixir that will spawn more OS processes for compiling dependencies potentially doubling performance, Sean Moriarity's keynote about designing LLM Native systems, and more! Show Notes online - http://podcast.thinkingelixir.com/247 (http://podcast.thinkingelixir.com/247)

Elixir Community News
- https://gigalixir.com/thinking (https://gigalixir.com/thinking?utm_source=thinkingelixir&utm_medium=shownotes) – Gigalixir is sponsoring the show, offering 20% off standard tier prices for a year with promo code "Thinking".
- https://bsky.app/profile/samrat.me/post/3lksxzzjqss2t (https://bsky.app/profile/samrat.me/post/3lksxzzjqss2t?utm_source=thinkingelixir&utm_medium=shownotes) – Phoenix now comes with DaisyUI, a decision that has sparked mixed reactions in the community.
- https://github.com/phoenixframework/phoenix/issues/6121 (https://github.com/phoenixframework/phoenix/issues/6121?utm_source=thinkingelixir&utm_medium=shownotes) – The GitHub issue discussing the addition of DaisyUI to Phoenix, showing the community's divided opinions.
- https://github.com/phoenixframework/phoenix/issues/6121#issuecomment-2739647725 (https://github.com/phoenixframework/phoenix/issues/6121#issuecomment-2739647725?utm_source=thinkingelixir&utm_medium=shownotes) – José Valim's explanation of the decision to include DaisyUI in Phoenix.
- https://security.erlef.org/aegis/ (https://security.erlef.org/aegis/?utm_source=thinkingelixir&utm_medium=shownotes) – EEF Security Working Group released their objectives and roadmap as the Aegis of the ecosystem.
- https://podcast.thinkingelixir.com/245 (https://podcast.thinkingelixir.com/245?utm_source=thinkingelixir&utm_medium=shownotes) – Previous podcast episode featuring the Erlang Ecosystem Foundation (EEF).
- https://x.com/erlangforums/status/1902297914791358669 (https://x.com/erlangforums/status/1902297914791358669?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of Erlang/OTP 28.0-rc2 release.
- https://erlangforums.com/t/erlang-otp-28-0-rc2-released/4599 (https://erlangforums.com/t/erlang-otp-28-0-rc2-released/4599?utm_source=thinkingelixir&utm_medium=shownotes) – Forum discussion about the Erlang/OTP 28.0-rc2 release.
- https://github.com/erlang/otp/releases/tag/OTP-28.0-rc2 (https://github.com/erlang/otp/releases/tag/OTP-28.0-rc2?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub release page for Erlang/OTP 28.0-rc2, which includes a source Software Bill of Materials (SBOM).
- https://www.erlang.org/eeps/eep-0076 (https://www.erlang.org/eeps/eep-0076?utm_source=thinkingelixir&utm_medium=shownotes) – Erlang Enhancement Proposal (EEP) 76 introducing priority messages, a key feature in OTP 28.
- https://www.youtube.com/watch?v=R9JRhIKQmqk (https://www.youtube.com/watch?v=R9JRhIKQmqk?utm_source=thinkingelixir&utm_medium=shownotes) – Sean Moriarity's keynote at Code BEAM America 2025 about designing LLM Native systems.
- https://www.cybersecuritydive.com/news/AI-project-fail-data-SPGlobal/742768/ (https://www.cybersecuritydive.com/news/AI-project-fail-data-SPGlobal/742768/?utm_source=thinkingelixir&utm_medium=shownotes) – Report showing AI project failure rates are on the rise, with 42% of businesses scrapping most AI initiatives.
- https://tech.doofinder.com/posts/live-view-portal (https://tech.doofinder.com/posts/live-view-portal?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction to LiveViewPortal, a JavaScript library for embedding Phoenix LiveView pages into any website.
- https://github.com/doofinder/live_view_portal (https://github.com/doofinder/live_view_portal?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for LiveViewPortal.
- https://elixirforum.com/t/liveviewportal-embed-liveviews-in-other-websites/70040 (https://elixirforum.com/t/liveviewportal-embed-liveviews-in-other-websites/70040?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir Forum discussion about LiveViewPortal.
- https://bsky.app/profile/ftes.de/post/3lkohiog4uv2b (https://bsky.app/profile/ftes.de/post/3lkohiog4uv2b?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of phoenix_test_playwright v0.6.0 release.
- https://github.com/ftes/phoenix_test_playwright (https://github.com/ftes/phoenix_test_playwright?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for phoenix_test_playwright with new features like cookie manipulation and browser launch timeout options.
- https://bsky.app/profile/david.bernheisel.com/post/3lkoe4tvc2s2o (https://bsky.app/profile/david.bernheisel.com/post/3lkoe4tvc2s2o?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement about Elixir's upcoming improvement to spawn more OS processes for compiling dependencies.
- https://github.com/elixir-lang/elixir/pull/14340 (https://github.com/elixir-lang/elixir/pull/14340?utm_source=thinkingelixir&utm_medium=shownotes) – Pull request for concurrent dependencies compilation in Elixir, potentially improving performance by 2x.
- https://goatmire.com/ (https://goatmire.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Explanation of the name "Goatmire," which is a loose translation of Getakärr, the historical name for Varberg.

Do you have some Elixir news to share?
Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com)

Find us online
- Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com)
- Message the show - X (https://x.com/ThinkingElixir)
- Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
- Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com)
- Mark Ericksen on X - @brainlid (https://x.com/brainlid)
- Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social)
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid)
- David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com)
- David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Evan Phoenix (@evanphx), CEO of Miren, joins Robby to explore the subtle but powerful difference between writing code that works and writing code that explains itself. They discuss the role of clarity in maintainable systems, why splitting a monolith can backfire, and what developers can learn from artists and tradespeople alike.

Episode Highlights
[00:01:30] What Makes Software Maintainable? Evan defines maintainability as how easily a newcomer can make a change with minimal context.
[00:02:30] Why Business Logic Should Be Obvious - A discussion on domain knowledge leakage and abstracting rules like "can we sell today?"
[00:05:00] Programming 'Mouthfeel' and the Trap of Prefactoring - Evan explains why prematurely optimizing for reuse can lead to unnecessary complexity.
[00:07:00] When to Extract Logic: The Copy/Paste Signal - A practical approach to identifying reusable components by spotting repeated code.
[00:08:00] Technical Debt as a Reflection of Cognitive Load - Why forgetting your own code doesn't automatically mean it's "bad" code.
[00:10:30] Testing as Emotional Insurance - How writing even basic checks can build team confidence, especially when test coverage is weak.
[00:13:00] Daily Integration Tests: A Low-Pressure Safety Net - Using nightly integration runs to catch invisible bugs in complex systems.
[00:14:00] Confidence > 100% Test Coverage - Why fast feedback loops matter more than aiming for exhaustive tests.
[00:20:00] Splitting the Monolith: A Cautionary Tale - Evan shares how decoupling apps without decoupling the database created chaos.
[00:22:00] Shared Models, Split Repos, and Hidden Pitfalls - The unexpected bugs that emerge when two apps maintain duplicate models and validations.
[00:23:00] Better Alternatives to Splitting Codebases - How separate deployments and tooling can mimic team separation without architectural debt.
[00:28:00] The Hidden Cost of Diverging Business Domains - When apps evolve independently, business logic begins to drift, undermining consistency.
[00:29:00] Building Miren and Staying Motivated - How Evan approaches early-stage product development with curiosity and detachment.
[00:36:00] How to Know When Your Open Source Project Is "Done" - Reframing "dead" projects as complete, and why stability is often a feature.
[01:01:00] Signals for Trusting Open Source Dependencies - Evan's mental checklist for evaluating if a library is worth adopting.
[01:07:00] The Importance of Hiring Junior Developers - Why investing in beginners is crucial for the future of our industry.
[01:08:00] Book Recommendations - Evan recommends The Inner Game of Tennis and Snow Crash.

Links and Resources
- Evan Phoenix's Website
- Evan on GitHub
- Evan on Mastodon

Book Recommendations
- The Inner Game of Tennis (book)
- Snow Crash by Neal Stephenson

Thanks to Our Sponsor!
Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error-tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Use the code maintainable to get a 10% discount for your first year. Check them out!

Subscribe to Maintainable on:
- Apple Podcasts
- Spotify
Or search "Maintainable" wherever you stream your podcasts.

Keep up to date with the Maintainable Podcast by joining the newsletter.
In this episode, Amy and Brad dive into the ongoing debate between Laravel and full stack JavaScript frameworks. They explore both ecosystems from their unique perspectives. Amy shares her real-world experience building a project in Laravel after working extensively with JavaScript frameworks, highlighting where each approach shines and struggles. From Laravel's backend prowess to the cognitive load of context switching between languages, this episode offers practical insights for developers weighing these technology choices.

Show Notes
00:00 - Intro
01:00 - Sponsorship: Sanity
01:59 - Origins of the Laravel vs JavaScript Discussion
03:59 - Amy's Experience Building a Project in Laravel
06:59 - PHP Development and Linting Experience
11:59 - Understanding MVC Architecture
15:00 - Challenges with JavaScript Backend Services
18:00 - Backend Strengths of Laravel
20:00 - Frontend Challenges in Laravel
23:00 - Comparing Laravel and JavaScript Ecosystem Solutions
26:59 - JavaScript Full Stack Frameworks Discussion
30:00 - Architectural Differences Between Frameworks
33:00 - Framework Choice Considerations
38:59 - Picks and Plugs: Newsletter and Cameras
42:00 - Picks and Plugs: Games and YouTube

Links and Resources
- Sanity.io (sponsor)
- Laravel
- Sam's podcast: Frontend First
- RedwoodJS
- Remix
- Next.js
- Astro
- Supabase
- Inngest
- Resend (email service)
- Postmark (email service)
- OpenAI
- Prisma
- PHP Storm
- Laravel Blade (templating language)
- Laravel Livewire
- Alpine.js
- Laravel Breeze
- Laravel Eloquent ORM
- Adonis/AdonisJS
- Episode 54: Why RedwoodJS is the App Framework for Startups, with David Price
- Vite
- Storybook
- Amy's newsletter: Broken Comb
- Insta360 X2 camera
- Insta360 Go 3 camera
- Stardew Valley (game)
- Brad's YouTube channel
- Cloudinary channel and Dev Hints series
What happens when a seasoned Rails developer with 17 years of experience decides to document their journey learning Hotwire? Radan Skorić joins us to discuss his ebook "Master Hotwire" and the fascinating parallels between writing and coding.Unlike most tutorials that start from ground zero, Radan's approach assumes you already know Rails—because that was his experience when learning Hotwire. "When I was picking up Hotwire, I had tons of Rails experience. I've just not done Hotwire," he explains. This focus allows his readers to skip the basics and dive deeper into what makes Hotwire powerful.We explore the meticulous process behind creating technical content, from researching pain points on forums to managing a beta reader program. Radan shares a powerful insight about feedback: "With positive feedback I feel good. With negative feedback I can actually go and improve it." This mindset led him to completely restructure portions of his book based on reader experiences.The conversation takes unexpected turns as Radan reveals how he overcame writer's block by applying software development principles to his writing process. Just as he might write tests to overcome coder's block, he found success by allowing himself to write "crap words" initially, knowing he would refactor later—a technique that mirrors how many of us approach code.Perhaps most compelling is Radan's observation about Hotwire's place in the ecosystem: it allows backend-focused developers to "stop lying" about being full-stack by providing a framework they can realistically master without diving deep into JavaScript frameworks like React. 
It's a refreshing perspective that reframes how we think about the full-stack developer identity. Check out masterhotwire.com and use coupon code "CodingCoders" for 20% off the book, and join the growing community of Rails developers embracing Hotwire!

Send us some love.

Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.

Judoscale
Autoscaling that actually works. Take control of your cloud hosting.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show
Send a text and I may answer it on the next episode (I cannot reply from this service).
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
A Tale of Two Phishing Sites
Two phishing sites may use very different backends, even if the sites themselves appear visually very similar. Phishing kits are often copied and modified, leading to sites using similar visual tricks on the user-facing side, but very different backends to host the sites and report data to the miscreant. https://isc.sans.edu/diary/A%20Tale%20of%20Two%20Phishing%20Sites/31810

A Phishing Tale of DOH and DNS MX Abuse
Infoblox discovered a new variant of the Meerkat phishing kit that uses DoH in JavaScript to discover MX records and generate better customized phishing pages. https://blogs.infoblox.com/threat-intelligence/a-phishing-tale-of-doh-and-dns-mx-abuse/

Using OpenID Connect for SSH
Cloudflare open-sourced its OPKSSH tool. It integrates SSO systems supporting OpenID Connect with SSH. https://github.com/openpubkey/opkssh/
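The Infoblox story describes phishing pages calling DNS-over-HTTPS from browser JavaScript to look up a domain's MX records. A minimal sketch of such a lookup against Google's public DoH JSON endpoint (the `dns.google/resolve` API and its `name`/`type` parameters are the documented interface; the helper names here are ours):

```javascript
// Build a DNS-over-HTTPS JSON query URL for a domain's MX records.
function buildDohMxUrl(domain) {
  const url = new URL('https://dns.google/resolve');
  url.searchParams.set('name', domain);
  url.searchParams.set('type', 'MX');
  return url.toString();
}

// Fetch and extract the mail hosts (requires network access when called).
async function lookupMx(domain) {
  const res = await fetch(buildDohMxUrl(domain));
  const json = await res.json();
  // Each MX answer's data is "<priority> <host>", e.g. "10 smtp.example.com."
  return (json.Answer || []).map((a) => a.data.split(' ')[1]);
}
```

A phishing kit can use the resolved mail host to guess the victim's email provider and serve a matching fake login page, which is why this pattern is worth recognizing in page source.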
- 150,000 sites compromised by JavaScript injection
- Vulnerabilities in numerous solar power systems found
- T-Mobile pays $33 million in SIM swap lawsuit

Huge thanks to our episode sponsor, ThreatLocker. ThreatLocker® is a global leader in Zero Trust endpoint security, offering cybersecurity controls to protect businesses from zero-day attacks and ransomware. ThreatLocker operates with a default deny approach to reduce the attack surface and mitigate potential cyber vulnerabilities. To learn more and start your free trial, visit ThreatLocker.com. Find the stories behind the headlines at CISOseries.com.
Hey Devs! Another episode of #FalaDev is here, bringing an inspiring chat about the challenges and opportunities of a career in mobile development! This time, PV Faria welcomes Gi Moeller, a mobile developer, for a conversation about what it's like to work with Swift, JavaScript, and Python, the changes in the market, and the challenges of the field. See what the journey to becoming a mobile dev looks like, the differences between iOS and Android development, the importance of good practices, adapting to new technologies, and the future of mobile in the tech market. Learn the paths to becoming a successful mobile dev: hit play and join us!
What are JavaScript promises, and why do you want to make them? Carl and Richard talk to Martine Dowden about all the various async options available in JavaScript today, including Callbacks, Promises, Async/Await, and even ReactiveJS! Martine digs into some of the more remarkable features available, including grouping async calls together so code is only called when they all complete, or the race option where only one needs to complete, and everything else is thrown away. Lots of power is available in JavaScript today. Have you taken advantage of it?
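The two behaviors Martine highlights map directly onto `Promise.all` (continue only when every call completes) and `Promise.race` (the first settled result wins, and the rest are discarded). A minimal sketch, with illustrative timings and stand-in data rather than real network calls:

```javascript
// delay(ms, value): a promise that resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Promise.all: kick off calls concurrently, run the continuation only
// once ALL of them have completed.
async function loadDashboard() {
  const [user, posts] = await Promise.all([
    delay(20, { name: 'Ada' }),    // e.g. fetch the user profile
    delay(40, ['post1', 'post2']), // e.g. fetch the user's posts
  ]);
  return { user, posts };
}

// Promise.race: only the first promise to settle is used; the others
// keep running but their results are thrown away.
async function fastestMirror() {
  return Promise.race([delay(10, 'mirror-a'), delay(50, 'mirror-b')]);
}
```

Note that `Promise.all` rejects as soon as any input rejects; if you want every outcome regardless of failures, `Promise.allSettled` is the variant to reach for.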
How do we handle scope creep for vulnerabilities?, find the bugs before it hits the real world, risk or hype vulnerabilities, RTL-SDR in a browser, using AI to hack AI and protect AI, 73 vulnerabilities of which 0 patches have been issued, Spinning Cats, bypassing WDAC with Teams and JavaScript, Rust will solve all the security problems, did you hear some Signal chats were leaked?, ingress nginx, robot dogs, what happens to your 23andme data?, Oracle's cloud was hacked, despite what Oracle PR says, inside the SCIF, and cvemap to the rescue. Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-867
Luca Casanato, member of the Deno core team, delves into the intricacies of debugging applications using Deno and OpenTelemetry. Discover how Deno's native integration with OpenTelemetry enhances application performance monitoring, simplifies instrumentation compared to Node.js, and unlocks new insights for developers! Links https://lcas.dev https://x.com/lcasdev https://github.com/lucacasonato https://mastodon.social/@lcasdev https://www.linkedin.com/in/luca-casonato-15946b156 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Luca Casonato.
Ever wondered how companies like Amazon or Pinterest deliver lightning-fast image search? Dive into this episode of MongoDB Podcast Live with Shane McAllister and Nenad, a MongoDB Champion, as they unravel the magic of semantic image search powered by MongoDB Atlas Vector Search!
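For a flavor of how a semantic image search like this is typically wired up, here is a hedged sketch of an Atlas Vector Search aggregation pipeline. The `$vectorSearch` stage and its parameters are standard Atlas syntax, but the index name `image_index` and the `embedding`/`caption` field names are placeholders for illustration, not details from the episode:

```javascript
// Hedged sketch: build an Atlas Vector Search pipeline for image search.
// The query vector would come from an embedding model applied to the
// query image or text; here it is just an array of numbers.
function buildImageSearchPipeline(queryVector, limit = 5) {
  return [
    {
      $vectorSearch: {
        index: 'image_index',      // assumed vector search index name
        path: 'embedding',         // assumed field holding image embeddings
        queryVector,               // embedding of the query
        numCandidates: limit * 20, // oversample candidates for better recall
        limit,                     // number of nearest neighbors to return
      },
    },
    // Surface the similarity score alongside each matched document.
    { $project: { caption: 1, score: { $meta: 'vectorSearchScore' } } },
  ];
}
```

In a real application this pipeline would be passed to `collection.aggregate(...)` via the MongoDB driver; the `numCandidates`-to-`limit` ratio is the usual recall/latency trade-off knob.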
React Native is officially 10!
News includes the release of Plug v1.17.0 with dark mode support for Plug.Debugger, an exciting Phoenix PR for co-located hooks that would place hook logic directly next to component code, a new RAG (Retrieval Augmented Generation) library from Bitcrowd for enhancing LLM interactions with document management, a syntax highlighter called Autumn powered by Tree-sitter, an Elixir-built YouTube downloader project called Pinchflat, and more! Show Notes online - http://podcast.thinkingelixir.com/246 (http://podcast.thinkingelixir.com/246)

Elixir Community News
- https://gigalixir.com/thinking (https://gigalixir.com/thinking?utm_source=thinkingelixir&utm_medium=shownotes) – Gigalixir is sponsoring the show, offering 20% off standard tier prices for a year with promo code "Thinking".
- https://github.com/elixir-plug/plug/pull/1261 (https://github.com/elixir-plug/plug/pull/1261?utm_source=thinkingelixir&utm_medium=shownotes) – Plug v1.17.0 introduces dark mode to Plug.Debugger, providing a more comfortable experience for developers working in dark environments.
- https://github.com/elixir-plug/plug/pull/1263 (https://github.com/elixir-plug/plug/pull/1263?utm_source=thinkingelixir&utm_medium=shownotes) – Plug.Debugger now links to function definitions in Hexdocs, making it easier to understand errors.
- https://github.com/phoenixframework/phoenix_live_view/pull/3705 (https://github.com/phoenixframework/phoenix_live_view/pull/3705?utm_source=thinkingelixir&utm_medium=shownotes) – Phoenix PR in progress for "Co-located Hooks" that would allow hook logic to be placed next to component code.
- https://github.com/elixir-nx/fine/tree/main/example (https://github.com/elixir-nx/fine/tree/main/example?utm_source=thinkingelixir&utm_medium=shownotes) – Fine, the C++ library for Elixir NIFs, now has an example project making it easier to experiment with C++ integrations in Elixir.
- https://podcast.thinkingelixir.com/244 (https://podcast.thinkingelixir.com/244?utm_source=thinkingelixir&utm_medium=shownotes) – Previous episode discussing Fine and how it integrates with PythonEx for embedding Python in Elixir.
- https://github.com/bitcrowd/rag (https://github.com/bitcrowd/rag?utm_source=thinkingelixir&utm_medium=shownotes) – New RAG (Retrieval Augmented Generation) library for Elixir from Bitcrowd to help with LLM context and document management.
- https://bitcrowd.dev/a-rag-library-for-elixir/ (https://bitcrowd.dev/a-rag-library-for-elixir/?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post explaining the new RAG library and its functionality for document ingestion, retrieval, and augmentation.
- https://expert-lsp.org/ (https://expert-lsp.org/?utm_source=thinkingelixir&utm_medium=shownotes) – Expert LSP, the built-in Elixir LSP, now has a reserved domain, though the site is currently empty.
- https://github.com/kieraneglin/pinchflat (https://github.com/kieraneglin/pinchflat?utm_source=thinkingelixir&utm_medium=shownotes) – Pinchflat is an Elixir-built project for downloading YouTube content locally, ideal for media centers or archiving.
- https://github.com/leandrocp/autumn (https://github.com/leandrocp/autumn?utm_source=thinkingelixir&utm_medium=shownotes) – Autumn is a new Elixir/tree-sitter syntax highlighter that supports terminal and HTML outputs, powered by Tree-sitter and Neovim themes.
- https://autumnus.dev/ (https://autumnus.dev/?utm_source=thinkingelixir&utm_medium=shownotes) – Website for the new Autumn syntax highlighter for Elixir.
- https://github.com/leandrocp/mdex (https://github.com/leandrocp/mdex?utm_source=thinkingelixir&utm_medium=shownotes) – MDEx library updated to support CommonMark, GitHub Flavored Markdown, Wiki Links, Discord Markdown tags, emoji, and syntax highlighting via Autumn.
- https://voidzero.dev/posts/announcing-voidzero-inc (https://voidzero.dev/posts/announcing-voidzero-inc?utm_source=thinkingelixir&utm_medium=shownotes) – Evan You (Vue.js creator) announces Vite Plus, a comprehensive JavaScript toolchain described as "Cargo but for JavaScript."

Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com)

Find us online
- Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com)
- Message the show - X (https://x.com/ThinkingElixir)
- Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
- Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com)
- Mark Ericksen on X - @brainlid (https://x.com/brainlid)
- Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social)
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid)
- David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com)
- David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Building a NestJS course for Scrimba and other channel & life updates
In this conversation, Simon Grimm and Matt Palmer discuss the capabilities and evolution of Replit, a platform that allows developers to quickly turn ideas into applications using AI tools. They explore the features of Replit, including its ability to create full stack applications, the integration of AI, and the unique advantages it offers compared to other development tools. The discussion also touches on the possibilities and limitations of using Replit for various types of projects. In this conversation, Simon and Matt discuss the challenges of managing Python environments and the advantages of using Replit for development. They explore how developers can integrate various tools into their workflows, the benefits of building with AI for rapid prototyping, and the importance of effective prompt engineering. The discussion also touches on the future collaboration between Replit and Expo, highlighting the evolving landscape of software development.

Learn React Native - https://galaxies.dev

Matt Palmer
Matt leads developer relations and product marketing at Replit, creating everything from tutorials to technical content. He got his start in data, working as a product analyst at AllTrails before moving to data engineering and eventually DevRel. He's worked on content with companies like LinkedIn, O'Reilly Media, xAI and Y Combinator. Outside of work, you can find him lifting weights or exploring the outdoors. Matt currently lives in San Francisco, but hails from Asheville, North Carolina.
- https://x.com/mattppal
- https://youtube.com/@mattpalmer
- https://www.linkedin.com/in/matt-palmer/
- https://mattpalmer.io/

Links
- Replit: https://replit.com/
- Replit X: https://x.com/replit
- Replit YouTube: https://www.youtube.com/@replit
- Replit Expo / React Native template: https://replit.com/@replit/Expo
- Replit Sign-up: https://replit.com
- Expo tutorial: https://www.youtube.com/playlist?list=PLto9KpJAqHMRuHwQ9OUjkVgZ69efpvslM
- Expo Blog: https://expo.dev/blog/from-idea-to-app-with-replit-and-expo

Takeaways
- Replit allows developers to create applications quickly and efficiently.
- AI integration in Replit enhances the development process.
- The platform supports multiple programming languages, primarily JavaScript and Python.
- Replit's workspace is designed for ease of use, requiring no installations.
- Users can deploy applications with a single click.
- Replit is evolving rapidly with advancements in AI technology.
- The platform is suitable for both beginners and experienced developers.
- Replit's unique features set it apart from other development tools.
- The community around Replit is growing, with increasing interest and usage.
- Building complex applications still requires significant effort and planning.
- Python environments can be cumbersome for developers.
- Replit excels in managing single directory projects.
- AI can significantly speed up the prototyping process.
- Disposable software allows for quick iterations and testing.
- Effective prompt engineering can enhance AI outputs.
- Developers should focus on minimum viable prompts for efficiency.
- Replit's integration with Expo is a promising development.
- AI tools can help in learning and understanding code better.
- Collaboration between tools can streamline the development process.
- Keeping up with new tools and technologies is essential for developers.
In this episode, Sarah and Will chat to Josh de Leeuw from Vassar College and the creator of jsPsych. We chat about the history of jsPsych, the unseen process behind creating open-access scientific software, and the current challenges facing software developers in the open scholarship movement. jsPsych is a JavaScript framework for creating online experiments, and is always looking for people to contribute to the codebase: https://jspsych.org. Follow Josh de Leeuw on Bluesky: https://bsky.app/profile/joshdeleeuw.bsky.social
Anthony Fu, Framework Developer at Nuxt Labs, discusses the shift to ESM-only formats in JavaScript development. He covers the controversy surrounding ESM, the advantages of moving from CJS to ESM, and what this transition means for the future of web development. Tune in to learn why now is the ideal time for this change, and how it benefits developers! Links https://antfu.me https://bsky.app/profile/antfu.me https://github.com/antfu https://x.com/antfu7 https://www.linkedin.com/in/antfu https://antfu.me/posts/move-on-to-esm-only We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Anthony Fu.
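One practical wrinkle in the CJS-to-ESM transition discussed here: a CommonJS file cannot `require()` an ESM-only package, but it can load one with dynamic `import()`, which works from either module system. A minimal sketch, using the built-in `node:path` module as a stand-in for a real ESM-only dependency:

```javascript
// Dynamic import() returns a promise of the module namespace object,
// so CommonJS consumers can still reach ESM-only code asynchronously.
// 'node:path' stands in for an ESM-only package in this sketch.
async function loadEsmOnly() {
  const pathMod = await import('node:path');
  // posix.join keeps the result platform-independent for the example.
  return pathMod.posix.join('pkg', 'dist', 'index.mjs');
}
```

The asynchrony is the real migration cost: every call site that used a synchronous `require()` has to become async or be restructured, which is one reason library authors increasingly ship ESM-only and let consumers migrate wholesale.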
Amjad Masad is the co-founder and CEO of Replit, a programming environment for everyone that allows anyone to write and deploy code, regardless of experience. Replit has 34 million users globally and is one of the fastest-growing developer communities in the world. Before Replit, Amjad was a tech lead on the JavaScript infrastructure team (which he helped start) at Facebook, where he contributed to popular open-source developer tools. Additionally, he played a key role as a founding engineer at the online coding school Codecademy.

0:00 - Intro
4:31 - Utopia, Dystopia, and Life in a Post-AI World
11:28 - Replit and Expressiveness in Computing
17:01 - Balancing Accessibility and Control in Products
19:53 - Is AI a Sustaining or Disruptive Technology?
25:04 - Building With AI and the Future of Company Structure
29:32 - The Shape and Defensibility of Software in a World of AI
33:37 - The Nation State and Stagnation in the Physical World
38:28 - Technology and Resilience
41:54 - What Shouldn't Get Automated?
43:54 - What Becomes Valuable in a Post-AI World?
47:10 - AI Augmenting vs Competing with Humans
51:51 - What Should More People Be Thinking About?
Today we're talking about mobile development. In this episode, we brought together a heavyweight team to explore the history, the challenges, and the future of those who deal daily with the challenge of building cross-platform applications. Here's who joined the conversation:
- André David, the host who reflects with himself
- Vinny Neves, Front-End Lead at Alura
- Yago Oliveira, Technical Content Coordinator at Alura
- Ilda Neta, Mobile Software Engineer
- Pedro Mello, Senior Software Engineer
Node.js began with an innocent question: "What happens if you run JavaScript outside the browser?" Despite the preconceptions and skepticism, there is no point denying it: the experiment succeeded, as millions of developers use Node.js every day. We dig into why that happened together with Igor Antonov! We also look forward to your likes, reposts, and comments in messengers and social networks! Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodlodkaPodcast Hosts in this episode: Zhenya Katella, Katya Petrova Useful links: The "About JavaScript and development" blog on Telegram - https://t.me/antonovjs The "About JavaScript and development" blog on YouTube - https://www.youtube.com/@antonov_i
We are working with Amplify on the 2025 State of AI Engineering Survey, to be presented at the AIE World's Fair in SF! Join the survey to shape the future of AI Eng! We first met Snipd over a year ago and were immediately impressed by the design, but were doubtful about snipping as the titular behavior: podcast apps are enormously sticky - Spotify spent almost $1b on podcast acquisitions and exclusive content just to get an 8% bump in market share among normies. However, after a disappointing Overcast 2.0 rewrite with no AI features in the last 3 years, I finally bit the bullet and switched to Snipd. It's 2025; your podcast app should be able to let you search transcripts of your podcasts. Snipd is the best implementation of this so far. And yet they keep shipping: what impressed us wasn't just how this tiny team of 4 was able to bootstrap a consumer AI app against massive titans and do so well, but also how seriously they think about learning through podcasts and improving retention of knowledge over time, aka "Duolingo for podcasts". As an educational AI podcast, that's a mission we can get behind. Full Video Pod: Find us on YouTube!
This was the first pod we've ever shot outdoors!

Show Notes
* How does Shazam work?
* Flutter/FlutterFlow
* wav2vec paper
* Perplexity Online LLM
* Google Search Grounding
* Comparing Snipd transcription with our Bee episode
* NIPS 2017 Flo Rida
* Gustav Söderström - Background Audio

Timestamps
* [00:00:03] Takeaways from AI Engineer NYC
* [00:00:17] Weather in New York.
* [00:00:26] Swyx and Snipd.
* [00:01:01] Kevin's AI summit experience.
* [00:01:31] Zurich and AI.
* [00:03:25] SigLIP authors join OpenAI.
* [00:03:39] Zurich is very costly.
* [00:04:06] The Snipd origin story.
* [00:05:24] Introduction to machine learning.
* [00:09:28] Snipd and user knowledge extraction.
* [00:13:48] App's tech stack, Flutter, Python.
* [00:15:11] How speakers are identified.
* [00:18:29] The concept of "backgroundable" video.
* [00:29:05] Voice cloning technology.
* [00:31:03] Using AI agents.
* [00:34:32] Snipd's future is multi-modal AI.
* [00:36:37] Snipd and existing user behaviour.
* [00:42:10] The app, summary, and timestamps.
* [00:55:25] The future of AI and podcasting.
* [1:14:55] Voice AI

Transcript
swyx [00:00:03]: Hey, I'm here in New York with Kevin Ben-Smith of Snipd. Welcome.
Kevin [00:00:07]: Hi. Hi. Amazing to be here.
swyx [00:00:09]: Yeah. This is our first ever, I think, outdoors podcast recording.
Kevin [00:00:14]: It's quite a location for the first time, I have to say.
swyx [00:00:18]: I was actually unsure because, you know, it's cold. It's like, I checked the temperature. It's like kind of one degree Celsius, but it's not that bad with the sun. No, it's quite nice. Yeah. Especially with our beautiful tea. With the tea. Yeah. Perfect. We're going to talk about Snips. I'm a Snips user. I'm a Snips user. I had to basically, you know, apart from Twitter, it's like the number one use app on my phone. Nice. When I wake up in the morning, I open Snips and I, you know, see what's new. And I think in terms of time spent or usage on my phone, I think it's number one or number two. Nice. Nice.
So I really had to talk about it also because I think people interested in AI want to think about like, how can we, we're an AI podcast, we have to talk about the AI podcast app. But before we get there, we just finished. We just finished the AI Engineer Summit and you came for the two days. How was it?

Kevin [00:01:07]: It was quite incredible. I mean, for me, the most valuable was just being in the same room with like-minded people who are building the future and who are seeing the future. You know, especially when it comes to AI agents, it's so often I have conversations with friends who are not in the AI world. And it's like so quickly it happens that you, it sounds like you're talking in science fiction. And it's just crazy talk. It was, you know, it's so refreshing to talk with so many other people who already see these things and yeah, be inspired then by them and not always feel like, like, okay, I think I'm just crazy. And like, this will never happen. It really is happening. And for me, it was very valuable. So day two, more relevant, more relevant for you than day one. Yeah. Day two. So day two was the engineering track. Yeah. That was definitely the most valuable for me. Like also as a, as a practitioner myself, especially there were one or two talks that had to do with voice AI and AI agents with voice. Okay. So that was quite fascinating. Also spoke with the speakers afterwards. Yeah. And yeah, they were also very open and, and, you know, this, this sharing attitude that's, I think in general, quite prevalent in the AI community. I also learned a lot, like really practical things that I can now take away with me. Yeah.

swyx [00:02:25]: I mean, on my side, I, I think I watched only like half of the talks. Cause I was running around and I think people saw me like towards the end, I was kind of collapsing.
I was on the floor, like, uh, towards the end because I, I needed to get, to get a rest, but yeah, I'm excited to watch the voice AI talks myself.

Kevin [00:02:43]: Yeah. Yeah. Do that. And I mean, from my side, thanks a lot for organizing this conference, for bringing everyone together. Do you have anything like this in Switzerland? The short answer is no. Um, I mean, I have to say the AI community in, especially Zurich, where. Yeah. Where we're, where we're based. Yeah. It is quite good. And it's growing, uh, especially driven by ETH, the, the technical university there, and all of the big companies, they have AI teams there. Google, like Google has the biggest tech hub outside of the U.S. in Zurich. Yeah. Facebook is doing a lot in Reality Labs. Uh, Apple has a secret AI team, OpenAI, and then SwapBit just announced that they're coming to Zurich. Yeah. Um, so there's a lot happening. Yeah.

swyx [00:03:23]: So, yeah, uh, I think the most recent notable move, I think the entire vision team from Google. Uh, Lucas Beyer, um, and, and all the other authors of SigLIP left Google to join OpenAI, which I thought was like, it's like a big move for a whole team to move all at once at the same time. So I've been to Zurich and it just feels expensive. Like it's a great city. Yeah. It's a great university, but I don't see it as like a business hub. Is it a business hub? I guess it is. Right.

Kevin [00:03:51]: Like it's kind of, well, historically it's, uh, it's a finance hub, finance hub. Yeah. I mean, there are some, some large banks there, right? Especially UBS, uh, the, the largest wealth manager in the world, but it's really becoming more of a tech hub now with all of the big, uh, tech companies there.

swyx [00:04:08]: I guess. Yeah. Yeah. And, but we, and research wise, it's all ETH. Yeah. There's some other things. Yeah. Yeah. Yeah.

Kevin [00:04:13]: It's all driven by ETH. And then, uh, its sister university EPFL, which is in Lausanne. Okay.
Um, which they're also doing a lot, but, uh, it's, it's, it's really ETH. Uh, and otherwise, no, I mean, it's a beautiful, really beautiful city. I can recommend. To anyone. To come, uh, visit Zurich, uh, uh, let me know, happy to show you around and of course, you know, you, you have the nature so close, you have the mountains so close, you have so, so beautiful lakes. Yeah. Um, I think that's what makes it such a livable city. Yeah.

swyx [00:04:42]: Um, and the cost is not, it's not cheap, but I mean, we're in New York City right now and, uh, I don't know, I paid $8 for a coffee this morning, so, uh, the coffee is cheaper in Zurich than in New York City. Okay. Okay. Let's talk about Snipd. What is Snipd and, you know, then we'll talk about your origin story, but I just, let's, let's get a crisp, what is Snipd? Yeah.

Kevin [00:05:03]: I always see two definitions of Snipd, so I'll give you one really simple, straightforward one, and then a second more nuanced, um, which I think will be valuable for the rest of our conversation. So the most simple one is just to say, look, we're an AI-powered podcast app. So if you listen to podcasts, we're now providing this AI-enhanced experience. But if you look at the more nuanced, uh, podcast. Uh, perspective, it's actually, we, we have a very big focus on people, like your audience, who listen to podcasts to learn something new. Like your audience, you want, they want to learn about AI, what's happening, what's, what's, what's the latest research, what's going on. And we want to provide a, a spoken audio platform where you can do that most effectively. And AI is basically the way that we can achieve that. Yeah.

swyx [00:05:53]: Means to an end. Yeah, exactly. When you started. Was it always meant to be AI or is it, was it more about the social sharing?

Kevin [00:05:59]: So the first version that we ever released was like three and a half years ago. Okay. Yeah. So this was before ChatGPT. Before Whisper. Yeah.
Before Whisper. Yeah. So I think a lot of the features that we now have in the app, they weren't really possible yet back then. But we already from the beginning, we always had the focus on knowledge. That's the reason why, you know, we in our team, why we listen to podcasts, but we did have a bit of a different approach. Like the idea in the very beginning was, so the name is Snips and you can create these, what we call Snips, which is basically a small snippet, like a clip from a, from a podcast. And we did envision sort of like a, like a social TikTok platform where some people would listen to full episodes and they would snip certain, like the best parts of it. And they would post that in a feed and other users would consume this feed of Snips. And use that as a discovery tool or just as a means to an end. And yeah, so you would have both people who create Snips and people who listen to Snips. So our big hypothesis in the beginning was, you know, it will be easy to get people to listen to these Snips, but super difficult to actually get them to create them. So we focused a lot of, a lot of our effort on making it as seamless and easy as possible to create a Snip. Yeah.swyx [00:07:17]: It's similar to TikTok. You need CapCut for there to be videos on TikTok. Exactly.Kevin [00:07:23]: And so for, for Snips, basically whenever you hear an amazing insight, a great moment, you can just triple tap your headphones. And our AI actually then saves the moment that you just listened to and summarizes it to create a note. And this is then basically a Snip. So yeah, we built, we built all of this, launched it. And what we found out was basically the exact opposite. So we saw that people use the Snips to discover podcasts, but they really, you know, they don't. You know, really love listening to long form podcasts, but they were creating Snips like crazy. 
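The triple-tap capture just described — grab the moment you just heard and turn it into a note — can be sketched as a clip-extraction step over a word-timestamped transcript. This is a minimal sketch only: the function name and the 60-second look-back window are assumptions, and the real app follows this step with an LLM summarization pass.

```python
# Sketch of the "snip" capture step: given word-level transcript timestamps
# and the playback position at the moment of a triple tap, pull out the
# transcript text for the preceding window. (All names and the window
# length are hypothetical; Snipd's real pipeline also summarizes the
# captured text with an LLM to create the note.)

def extract_snip(words, tap_time, lookback=60.0):
    """words: list of (start_seconds, word) pairs; tap_time: playback position."""
    window_start = max(0.0, tap_time - lookback)
    clip = [w for start, w in words if window_start <= start <= tap_time]
    return " ".join(clip)

words = [(0.0, "Welcome"), (30.0, "transformers"), (70.0, "scale"), (95.0, "bye")]
print(extract_snip(words, tap_time=80.0))  # words spoken in the last minute
```

The point of keeping this step pure and local is that it can run on-device (even on the watch), with the heavier summarization deferred to the backend.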
And this was, this was definitely one of these aha moments when we realized like, hey, we should be really doubling down on the knowledge of learning of, yeah, helping you learn most effectively and helping you capture the knowledge that you listen to and actually do something with it. Because this is in general, you know, we, we live in this world where there's so much content and we consume and consume and consume. And it's so easy to just at the end of the podcast. You just start listening to the next podcast. And five minutes later, you've forgotten everything. 90%, 99% of what you've actually just learned. Yeah.swyx [00:08:31]: You don't know this, but, and most people don't know this, but this is my fourth podcast. My third podcast was a personal mixtape podcast where I Snipped manually sections of podcasts that I liked and added my own commentary on top of them and published them as small episodes. Nice. So those would be maybe five to 10 minute Snips. Yeah. And then I added something that I thought was a good story or like a good insight. And then I added my own commentary and published it as a separate podcast. It's cool. Is that still live? It's still live, but it's not active, but you can go back and find it. If you're, if, if you're curious enough, you'll see it. Nice. Yeah. You have to show me later. It was so manual because basically what my process would be, I hear something interesting. I note down the timestamp and I note down the URL of the podcast. I used to use Overcast. So it would just link to the Overcast page. And then. Put in my note taking app, go home. Whenever I feel like publishing, I will take one of those things and then download the MP3, clip out the MP3 and record my intro, outro and then publish it as a, as a podcast. But now Snips, I mean, I can just kind of double click or triple tap.Kevin [00:09:39]: I mean, those are very similar stories to what we hear from our users. 
You know, it's, it's normal that you're doing, you're doing something else while you're listening to a podcast. Yeah. A lot of our users, they're driving, they're working out, walking their dog. So in those moments when you hear something amazing, it's difficult to just write them down or, you know, you have to take out your phone. Some people take a screenshot, write down a timestamp, and then later on you have to go back and try to find it again. Of course you can't find it anymore because there's no search. There's no command F. And, um, these, these were all of the issues that, that, that we encountered also ourselves as users. And given that our background was in AI, we realized like, wait, hey, this is. This should not be the case. Like podcast apps today, they're still, they're basically repurposed music players, but we actually look at podcasts as one of the largest sources of knowledge in the world. And once you have that different angle of looking at it together with everything that AI is now enabling, you realize like, hey, this is not the way that we, that podcast apps should be. Yeah.swyx [00:10:41]: Yeah. I agree. You mentioned something that you said your background is in AI. Well, first of all, who's the team and what do you mean your background is in AI?Kevin [00:10:48]: Those are two very different things. I'm going to ask some questions. Yeah. Um, maybe starting with, with my backstory. Yeah. My backstory actually goes back, like, let's say 12 years ago or something like that. I moved to Zurich to study at ETH and actually I studied something completely different. I studied mathematics and economics basically with this specialization for quant finance. Same. Okay. Wow. All right. So yeah. And then as you know, all of these mathematical models for, um, asset pricing, derivative pricing, quantitative trading. And for me, the thing that, that fascinates me the most was the mathematical modeling behind it. 
Uh, mathematics, uh, statistics, but I was never really that passionate about the finance side of things.

swyx [00:11:32]: Oh really? Oh, okay. Yeah. I mean, we're different there.

Kevin [00:11:36]: I mean, one just, let's say symptom that I noticed now, like, like looking back during that time. Yeah. I think I never read an academic paper about the subject in my free time. And then it was towards the end of my studies. I was already working for a big bank. One of my best friends, he comes to me and says, Hey, I just took this course. You have to, you have to do this. You have to take this lecture. Okay. And I'm like, what, what, what is it about? It's called machine learning and I'm like, what, what, what kind of stupid name is that? Uh, so he sent me the slides and like over a weekend I went through all of the slides and I just, I just knew like freaking hell. Like this is it. I'm, I'm in love. Wow. Yeah. Okay. And that was then over the course of the next, I think like 12 months, I just really got into it. Started reading all about it, like reading blog posts, starting building my own models.

swyx [00:12:26]: Was this course by a famous person, famous university? Was it like the Andrew Ng Coursera thing? No.

Kevin [00:12:31]: So this was an ETH course. So a professor at ETH. Did he teach in English, by the way? Yeah. Okay.

swyx [00:12:37]: So these slides are somewhere available. Yeah. Definitely. I mean, now they're quite outdated. Yeah. Sure. Well, I think, you know, reflecting on the finance thing for a bit. So I, I was, used to be a trader, uh, sell side and buy side. I was options trader first and then I was more like a quantitative hedge fund analyst. We never really used machine learning. It was more like a little bit of statistical modeling, but really like you, you fit, you know, your regression.

Kevin [00:13:03]: No, I mean, that's, that's what it is.
And, uh, or you, you solve partial differential equations and have then numerical methods to, to, to solve these. That's, that's for you. That's your degree. And that's, that's not really what you do at work. Right. Unless, well, I don't know what you do at work. In my job. No, no, we weren't solving the partial differential. Yeah.swyx [00:13:18]: You learn all this in school and then you don't use it.Kevin [00:13:20]: I mean, we, we, well, let's put it like that. Um, in some things, yeah, I mean, I did code algorithms that would do it, but it was basically like, it was the most basic algorithms and then you just like slightly improve them a little bit. Like you just tweak them here and there. Yeah. It wasn't like starting from scratch, like, Oh, here's this new partial differential equation. How do we know?swyx [00:13:43]: Yeah. Yeah. I mean, that's, that's real life, right? Most, most of it's kind of boring or you're, you're using established things because they're established because, uh, they tackle the most important topics. Um, yeah. Portfolio management was more interesting for me. Um, and, uh, we, we were sort of the first to combine like social data with, with quantitative trading. And I think, uh, I think now it's very common, but, um, yeah. Anyway, then you, you went, you went deep on machine learning and then what? You quit your job? Yeah. Yeah. Wow.Kevin [00:14:12]: I quit my job because, uh, um, I mean, I started using it at the bank as well. Like try, like, you know, I like desperately tried to find any kind of excuse to like use it here or there, but it just was clear to me, like, no, if I want to do this, um, like I just have to like make a real cut. So I quit my job and joined an early stage, uh, tech startup in Zurich where then built up the AI team over five years. Wow. Yeah. 
So yeah, we built various machine learning, uh, things for, for banks, from like models for, for sales teams to identify which clients like which product to sell to them and with what reasons, all the way to, we did a lot, a lot with bank transactions. One of the actually most fun projects for me was we had an, an NLP model that would take the booking text of a transaction, like a credit card transaction, and prettify it. Yeah. Because it had all of these, you know, like numbers in there and abbreviations and whatnot. And sometimes you look at it like, what, what is this? And it would just, you know, it would just change it to, I don't know, CVS. Yeah.

swyx [00:15:15]: Yeah. But I mean, would you have hallucinations?

Kevin [00:15:17]: No, no, no. The way that everything was set up, it wasn't like, it wasn't yet fully end-to-end generative, uh, neural network as what you would use today. Okay.

swyx [00:15:30]: Awesome. And then when did you go like full time on Snipd? Yeah.

Kevin [00:15:33]: So basically that was, that was afterwards. I mean, how that started was the friend of mine who got me into machine learning, uh, him and I, uh, like he also got me interested into startups. He's had a big impact on my life. And the two of us would just jam on, on like ideas for startups every now and then. And his background was also in AI, data science. And we had a couple of ideas, but given that we were working full time, we were thinking about, uh, so we participated in Hack Zurich. That's, uh, Europe's biggest hackathon, um, or at least was at the time. And we said, Hey, this is just a weekend. Let's just try out an idea, like hack something together and see how it works. And the idea was that we'd be able to search through podcast episodes, like within a podcast. Yeah. So we did that. Long story short, uh, we managed to do it, like to build something that we realized, Hey, this actually works. You can, you can find things again in podcasts.
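Searching "within a podcast" like the hackathon project did is, at its core, nearest-neighbour lookup over timestamped transcript chunks: score every chunk against the query, jump to the best timestamp. Here is a toy sketch using bag-of-words cosine similarity; every name is made up, and the real system would use learned text embeddings rather than word overlap.

```python
# Toy search over timestamped transcript chunks: score each chunk by word
# overlap with the query (cosine over word-count vectors) and return the
# timestamp of the best match. Real systems use learned embeddings; this
# just shows the shape of the lookup.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(chunks, query):
    """chunks: list of (timestamp_seconds, text); returns the best timestamp."""
    q = Counter(query.lower().split())
    best = max(chunks, key=lambda c: cosine(Counter(c[1].lower().split()), q))
    return best[0]

chunks = [(120, "we talk about rockets and mars"),
          (4920, "elon lights up and smokes a joint on air"),
          (7200, "closing thoughts on ai")]
print(search(chunks, "smokes a joint"))  # timestamp of the matching moment
```

Note that exact word overlap would miss a paraphrased query like "smoking weed"; that gap is exactly why learned embeddings (or a natural-language search layer, as in their demo) matter.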
We had like a natural language search and we pitched it on stage. And we actually won the hackathon, which was cool. I mean, we, we also, I think we had a good, um, like a good, good pitch or a good example. So we, we used the famous Joe Rogan episode with Elon Musk where Elon Musk smokes a joint. Okay. Um, it's like a two and a half hour episode. So we were on stage and then we just searched for like smoking weed and it would find that exact moment. It will play it. And it just like, come on with Elon Musk, just like smoking. Oh, so it was video as well? No, it was actually completely based on audio. But we did have the video for the presentation. Yeah. Which had a, had of course an amazing effect. Yeah. Like this gave us a lot of activation energy, but it wasn't actually about winning the hackathon. Yeah. But the interesting thing that happened was after we pitched on stage, several of the other participants, like a lot of them came up to us and started saying like, Hey, can I use this? Like I have this issue. And like some also came up and told us about other problems that they have, like very adjacent to this with a podcast. Where's like, like this. Like, could, could I use this for that as well? And that was basically the, the moment where I realized, Hey, it's actually not just us who are having these issues with, with podcasts and getting to the, making the most out of this knowledge. Yeah. The other people. Yeah. That was now, I guess like four years ago or something like that. And then, yeah, we decided to quit our jobs and start, start this whole snip thing. Yeah. How big is the team now? We're just four people. Yeah. Just four people. Yeah. Like four. We're all technical. Yeah. Basically two on the, the backend side. So one of my co-founders is this person who got me into machine learning and startups. And we won the hackathon together. So we have two people for the backend side with the AI and all of the other backend things. 
And two for the front end side, building the app.swyx [00:18:18]: Which is mostly Android and iOS. Yeah.Kevin [00:18:21]: It's iOS and Android. We also have a watch app for, for Apple, but yeah, it's mostly iOS. Yeah.swyx [00:18:27]: The watch thing, it was very funny because in the, in the Latent Space discord, you know, most of us have been slowly adopting snips. You came to me like a year ago and you introduced snip to me. I was like, I don't know. I'm, you know, I'm very sticky to overcast and then slowly we switch. Why watch?Kevin [00:18:43]: So it goes back to a lot of our users, they do something else while, while listening to a podcast, right? Yeah. And one of the, us giving them the ability to then capture this knowledge, even though they're doing something else at the same time is one of the killer features. Yeah. Maybe I can actually, maybe at some point I should maybe give a bit more of an overview of what the, all of the features that we have. Sure. So this is one of the killer features and for one big use case that people use this for is for running. Yeah. So if you're a big runner, a big jogger or cycling, like really, really cycling competitively and a lot of the people, they don't want to take their phone with them when they go running. So you load everything onto the watch. So you can download episodes. I mean, if you, if you have an Apple watch that has internet access, like with a SIM card, you can also directly stream. That's also possible. Yeah. So of course it's a, it's basically very limited to just listening and snipping. And then you can see all of your snips later on your phone. Let me tell you this error I just got.swyx [00:19:47]: Error playing episode. Substack, the host of this podcast, does not allow this podcast to be played on an Apple watch. Yeah.Kevin [00:19:52]: That's a very beautiful thing. So we found out that all of the podcasts hosted on Substack, you cannot play them on an Apple watch. Why is this restriction? What? 
Like, don't ask me. We try to reach out to Substack. We try to reach out to some of the bigger podcasters who are hosting the podcast on Substack to also let them know. Substack doesn't seem to care. This is not specific to our app. You can also check out the Apple podcast app. Yeah. It's the same problem. It's just that we actually have identified it. And we tell the user what's going on.swyx [00:20:25]: I would say we host our podcast on Substack, but they're not very serious about their podcasting tools. I've told them before, I've been very upfront with them. So I don't feel like I'm shitting on them in any way. And it's kind of sad because otherwise it's a perfect creative platform. But the way that they treat podcasting as an afterthought, I think it's really disappointing.Kevin [00:20:45]: Maybe given that you mentioned all these features, maybe I can give a bit of a better overview of the features that we have. Let's do that. Let's do that. So I think we're mostly in our minds. Maybe for some of the listeners.swyx [00:20:55]: I mean, I'll tell you my version. Yeah. They can correct me, right? So first of all, I think the main job is for it to be a podcast listening app. It should be basically a complete superset of what you normally get on Overcast or Apple Podcasts or anything like that. You pull your show list from ListenNotes. How do you find shows? You've got to type in anything and you find them, right?Kevin [00:21:18]: Yeah. We have a search engine that is powered by ListenNotes. Yeah. But I mean, in the meantime, we have a huge database of like 99% of all podcasts out there ourselves. Yeah.swyx [00:21:27]: What I noticed, the default experience is you do not auto-download shows. And that's one very big difference for you guys versus other apps, where like, you know, if I'm subscribed to a thing, it auto-downloads and I already have the MP3 downloaded overnight. For me, I have to actively put it onto my queue, then it auto-downloads. 
And actually, I initially didn't like that. I think I maybe told you that I was like, oh, it's like a feature that I don't like. Like, because it means that I have to choose to listen to it in order to download and not to... It's like opt-in. There's a difference between opt-in and opt-out. So I opt-in to every episode that I listen to. And then, like, you know, you open it and depends on whether or not you have the AI stuff enabled. But the default experience is no AI stuff enabled. You can listen to it. You can see the snips, the number of snips and where people snip during the episode, which roughly correlates to interest level. And obviously, you can snip there. I think that's the default experience. I think snipping is really cool. Like, I use it to share a lot on Discord. I think we have tons and tons of just people sharing snips and stuff. Tweeting stuff is also like a nice, pleasant experience. But like the real features come when you actually turn on the AI stuff. And so the reason I got snipped, because I got fed up with Overcast not implementing any AI features at all. Instead, they spent two years rewriting their app to be a little bit faster. And I'm like, like, it's 2025. I should have a podcast that has transcripts that I can search. Very, very basic thing. Overcast will basically never have it.Kevin [00:22:49]: Yeah, I think that was a good, like, basic overview. Maybe I can add a bit to it with the AI features that we have. So one thing that we do every time a new podcast comes out, we transcribe the episode. We do speaker diarization. We identify the speaker names. Each guest, we extract a mini bio of the guest, try to find a picture of the guest online, add it. We break the podcast down into chapters, as in AI generated chapters. That one. That one's very handy. With a quick description per title and quick description per each chapter. We identify all books that get mentioned on a podcast. You can tell I don't use that one. 
It depends on the podcast. There are some podcasts where the guests often recommend like an amazing book. So later on, you can you can find that again.swyx [00:23:42]: So you literally search for the word book or I just read blah, blah, blah.Kevin [00:23:46]: No, I mean, it's all LLM based. Yeah. So basically, we have we have an LLM that goes through the entire transcript and identifies if a user mentions a book, then we use perplexity API together with various other LLM orchestration to go out there on the Internet, find everything that there is to know about the book, find the cover, find who or what the author is, get a quick description of it for the author. We then check on which other episodes the author appeared on.swyx [00:24:15]: Yeah, that is killer.Kevin [00:24:17]: Because that for me, if. If there's an interesting book, the first thing I do is I actually listen to a podcast episode with a with a writer because he usually gives a really great overview already on a podcast.swyx [00:24:28]: Sometimes the podcast is with the person as a guest. Sometimes his podcast is about the person without him there. Do you pick up both?Kevin [00:24:37]: So, yes, we pick up both in like our latest models. But actually what we show you in the app, the goal is to currently only show you the guest to separate that. In the future, we want to show the other things more.swyx [00:24:47]: For what it's worth, I don't mind. Yeah, I don't think like if I like if I like somebody, I'll just learn about them regardless of whether they're there or not.Kevin [00:24:55]: Yeah, I mean, yes and no. We we we have seen there are some personalities where this can break down. So, for example, the first version that we released with this feature, it picked up much more often a person, even if it was not a guest. Yeah. For example, the best examples for me is Sam Altman and Elon Musk. Like they're just mentioned on every second podcast and it has like they're not on there. 
And if you're interested in it, you can go to Elon Musk. And actually like learning from them. Yeah, I see. And yeah, we updated our, our algorithms, improved that a lot. And now it's gotten much better to only pick it up if they're a guest. And yeah, so this, this is maybe to come back to the features, two more important features. Like we have the ability to chat with an episode. Yes. Of course, you can do the old style of searching through a transcript with a keyword search. But I think for me, this is, this is how you used to do search and extracting knowledge in the, in the past. Old school. And the AI way is, is basically an LLM. So you can ask the LLM, hey, when do they talk about topic X? If you're interested in only a certain part of the episode, you can ask it, for example, to give a quick overview of the episode, key takeaways afterwards, also to create a note for you. So this is really like very open, open ended. And yeah. And then finally, the snipping feature that we mentioned, just to reiterate. Yeah. I mean, here the, the feature is that whenever you hear an amazing idea, you can triple tap your headphones or click a button in the app and the AI summarizes the insight you just heard and saves that together with the original transcript and audio in your knowledge library. I also noticed that you, you skip dynamic content. So dynamic content, we do not skip it automatically. Oh, sorry. You detect. But we detect it. Yeah. I mean, that's one of the things that most people don't, don't actually know: that like the way that ads get inserted into podcasts, or into most podcasts, is actually that every time you listen to a podcast, you actually get access to a different audio file, and on the server, a different ad is inserted into the MP3 file automatically. Yeah. Based on IP. Exactly.
And what that means is, if we transcribe an episode and have a transcript with timestamps, like word-specific timestamps, if you suddenly get a different audio file, the whole timestamps are messed up and that's like a huge issue. And for that, we actually had to build another algorithm that would dynamically, on the fly, re-sync the audio that you're listening to with the transcript that we have. Yeah. Which is a fascinating problem in and of itself.

swyx [00:27:24]: You sync by matching up the sound waves? Or like, or do you sync by matching up words, like you basically do partial transcription?

Kevin [00:27:33]: We are not matching up words. It's happening basically on a bytes-level matching. Yeah. Okay.

swyx [00:27:40]: It relies on this. It relies on the exact match at some point.

Kevin [00:27:46]: So it's actually. We're actually not doing exact matches, but we're doing fuzzy matches to identify the moment. It's basically, we basically built Shazam for podcasts. Just as a little side project to solve this issue.

swyx [00:28:02]: Actually, fun fact, apparently the Shazam algorithm is open. They published the paper, it's talked about. I haven't really dived into the paper. I thought it was kind of interesting that basically no one else has built Shazam.

Kevin [00:28:16]: Yeah, I mean, well, the one thing is the algorithm. If you now talk about Shazam, the other thing is also having the database behind it and having the user mindset that if they have this problem, they come to you, right?

swyx [00:28:29]: Yeah, I'm very interested in the tech stack. There's a big data pipeline. Could you share what is the tech stack?

Kevin [00:28:35]: What are the most interesting or challenging pieces of it? So the general tech stack is our entire backend is, or 90% of our backend is written in Python. Okay. Hosting everything on Google Cloud Platform. And our front end is written with, well, we're using the Flutter framework.
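The re-sync problem above can be sketched as an alignment search: reduce each audio file to a coarse per-frame fingerprint, then find the offset at which the reference content best lines up inside the file the listener actually received. This is a toy stand-in for the fuzzy bytes-level matching described; the fingerprints here are just small integers and all names are hypothetical.

```python
# Dynamically inserted ads shift the audio, so word timestamps computed
# from the reference file no longer line up with the listener's file.
# Sketch: fingerprint both files per frame, then search for the frame
# offset with the most matching frames (fuzzy: we take the best score,
# not a required exact run).

def best_offset(played, reference, max_shift=50):
    """Offset into `played` where the `reference` content lines up best."""
    def score(shift):
        return sum(1 for r, p in zip(reference, played[shift:]) if r == p)
    return max(range(max_shift), key=score)

content = [3, 1, 4, 1, 5, 9, 2, 6]  # fingerprint of the episode content
played = [7] * 10 + content         # listener's file: a 10-frame ad, then content
print(best_offset(played, content))  # content starts 10 frames in
```

Once the offset is known, every word timestamp in the transcript can be shifted by the same amount, which is what keeps snips pointing at the right audio even when a different ad was stitched in server-side.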
So it's written in Dart and then compiled natively. So we have one code base that handles both Android and iOS. You think that was a good decision? It's something that a lot of people are exploring. So up until now, yes. Okay. Look, it has its pros and cons. Some of the, you know, for example, earlier, I mentioned we have an Apple Watch app. Yeah. I mean, there's no Flutter for that, right? So that you build native. And then of course you have to sort of like sync these things together. I mean, I'm not the front end engineer, so I'm just relaying this information, but our front end engineers are very happy with it. It's enabled us to be quite fast and be on both platforms from the very beginning. And when I talk with people and they hear that we are using Flutter, usually they think like, ah, it's not performant. It's super junky, janky and everything. And then they use it. They use our app and they're always super surprised. Or if they've already used our app, I couldn't tell them. They're like, what? Yeah. Um, so there is actually a lot that you can do with it.

swyx [00:29:51]: The danger, the concern, there's a few concerns, right? One, it's Google. So when were they, when are they going to abandon it? Two, you know, they're optimized for Android first. So iOS is like a second, second thought, or like you can feel that it is not a native iOS app. Uh, but you guys put a lot of care into it. And then maybe three, from my point of view, JavaScript, as a JavaScript guy, React Native was supposed to be there. And I think that it hasn't really fulfilled that dream. Um, maybe Expo is trying to do that, but, um, again, it is not, does not feel as productive as Flutter. And I've, I spent a week on Flutter and Dart, and I'm an investor in FlutterFlow, which is the low-code, uh, Flutter, Flutter startup. That's doing very, very well. I think a lot of people are still Flutter skeptics. Yeah. Wait. So are you moving away from Flutter?

Kevin [00:30:41]: I don't know.
We don't have plans to do that.

swyx [00:30:43]: Okay. Let's go back to the stack.

Kevin [00:30:47]: That was just to give you a bit of an overview. I think the more interesting things are, of course, on the AI side. As I mentioned earlier, when we started out it was before the ChatGPT moment, before there was the GPT-3.5 Turbo API. So in the beginning we were actually running everything ourselves: open-source models, trying to fine-tune them. They worked, but let's be honest, they weren't great.

swyx: What were you using for transcription, before Whisper?

Kevin: We were using wav2vec. There was a Google one, right? No, it was a Facebook one. That was actually one of the papers that, when it came out, was one of the reasons I said we should try to start a startup in the audio space. Before that I had been following the NLP space quite closely, and as I mentioned earlier, we did some stuff at the startup I was working at as well. But wav2vec was the first paper where I had seen the whole transformer architecture move over to audio, or, to say it a bit more generally, the first time I saw the transformer architecture applied to continuous data instead of discrete tokens. And it worked amazingly. The transformer architecture plus self-supervised learning, these two things moved over, and for me it was: hey, this is now going to take off similarly to how the text space has taken off. And with those two things in place, even if some features we want to build are not possible yet, they will be possible in the near term on this trajectory. So that was a little side note. In the meantime, we're using Whisper.
We're still hosting some of the models ourselves, for example the whole transcription and speaker diarization pipeline.

swyx [00:32:38]: You need it to be as cheap as possible.

Kevin [00:32:40]: Yeah, exactly. I mean, we're doing this at scale, where we have a lot of audio.

swyx [00:32:44]: What numbers can you disclose? Just to give people an idea, because it's a lot.

Kevin: We have more than a million podcasts that we've already processed.

swyx: When you say a million... so processing is basically: you have some kind of list of podcasts that you will auto-process, and others where a paying member can choose to press the button and transcribe it. Is that the rough idea?

Kevin [00:33:08]: Yeah, exactly. And when you press that button, we also transcribe it. So first we do the transcription, then the speaker diarization, where you basically identify speech blocks that belong to the same speaker. This is then all orchestrated with an LLM to identify which speech block belongs to which speaker, together with the guest name and bio that, as I mentioned earlier, we identify. So all of that comes together in an LLM to actually assign speaker names to each block. And most of the rest of the pipeline we've now migrated to LLMs. We use mainly OpenAI and Google models, so the Gemini models and the OpenAI models, and we use some Perplexity, basically for those things where we need web search. That's something I'm still hoping OpenAI especially will also provide as an API.

swyx: Oh, why?

Kevin: Well, basically, for us as a consumer, the more providers there are...

swyx [00:34:07]: The more downtime.

Kevin [00:34:08]: The more competition, and it will lead to better results and lower costs over time. I don't see Perplexity as expensive. If you use the web search, the price is like $5 per thousand queries.
Which is affordable. But if you compare that to just a normal LLM call, it's much more expensive.

swyx: Have you tried Exa?

Kevin: We've looked into it, but we haven't really tried it. We started with Perplexity and it works well. And if I remember correctly, Exa is also a bit more expensive.

swyx [00:34:45]: I don't know. They seem to focus on the search thing as a search API, whereas Perplexity is maybe a more consumer-y business with higher margins. I'll put it like this: Perplexity is trying to be a product, Exa is trying to be infrastructure. That'll be my distinction there. And then the other thing I will mention is that Google has a search grounding feature, which you might want.

Kevin [00:35:07]: Yeah, we've also tried that out. Not as good. We didn't go into too much detail in really comparing it quality-wise, because we already had the Perplexity one and it's working. I think the price there is actually higher than Perplexity, too.

swyx [00:35:26]: Really? Google should cut their prices.

Kevin [00:35:29]: Maybe it was the same price; I don't want to say something incorrect. But it wasn't cheaper, it wasn't compelling, and then there was no reason to switch. In general, given that we work with a lot of content, price is actually something that we do look at. For us it's not just about taking the best model for every task; it's really about identifying what kind of intelligence level you need, and then getting the best price for that, to be able to scale this and let our users use these features with as many podcasts as possible.

swyx [00:36:03]: I wanted to double-click on diarization.
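The speaker-assignment step Kevin described a moment ago, diarized speech blocks plus the identified guest name and bio fed to an LLM that maps anonymous speaker IDs to names, might be sketched as below. The prompt wording, block format, and JSON contract are assumptions for illustration, not Snipd's actual code:

```python
import json

def build_speaker_prompt(blocks, guest_bio):
    """Format diarized speech blocks for an LLM that must map anonymous
    speaker IDs (SPEAKER_00, ...) to real names. Hypothetical prompt."""
    lines = [f"[{b['speaker']}] {b['text']}" for b in blocks]
    return (
        "Guest bio: " + guest_bio + "\n\n"
        "Transcript blocks:\n" + "\n".join(lines) + "\n\n"
        'Return JSON mapping each speaker ID to a name, '
        'e.g. {"SPEAKER_00": "..."}'
    )

def assign_names(blocks, mapping):
    """Apply the LLM's speaker-ID to name mapping back onto the blocks."""
    return [{**b, "speaker": mapping.get(b["speaker"], b["speaker"])}
            for b in blocks]

blocks = [
    {"speaker": "SPEAKER_00", "text": "Welcome back to the show."},
    {"speaker": "SPEAKER_01", "text": "Thanks for having me."},
]
prompt = build_speaker_prompt(blocks, "Kevin, co-founder of a podcast app.")
# In production this mapping would come from the LLM's JSON response:
mapping = json.loads('{"SPEAKER_00": "Host", "SPEAKER_01": "Kevin"}')
named = assign_names(blocks, mapping)
```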
It's something that I don't think people do very well. So, you know, I'm a Bee user. I don't have it right now, and they were supposed to speak but dropped out last minute; we've had them on the podcast before. And it's not great yet. Do you use just pyannote, the default stuff, or do you find any tricks for diarization?

Kevin [00:36:27]: So we do use the open-source packages, but we have tweaked them a bit here and there. Since you mention the Bee AI guys: I actually listened to that podcast episode, it was super nice. And when you started talking about speaker diarization, I just had to think about their setting.

[unintelligible crosstalk]

Kevin [00:37:12]: So yeah, that of course helps us. Another thing that helps us is that we know certain structural aspects of the podcast. For example: how often does someone speak? Let's say there's a one-hour episode and someone speaks for 30 seconds. That person is most probably not the guest and not the host; it's probably a speaker from an ad. So we have certain heuristics like that which we can use and leverage to improve things. And in the past we've also changed the clustering algorithm. Basically, how a lot of speaker diarization works is: you create an embedding for the speech that's happening, and then you try to somehow cluster these embeddings and find out, this is all one speaker, this is all another speaker. There we've also tweaked a couple of things, again using heuristics we could apply from knowing how podcasts function.
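The speaking-time heuristic Kevin just described is easy to sketch. The data layout and the 2% threshold below are assumptions for illustration, not Snipd's tuned values:

```python
def total_speaking_time(segments):
    """Seconds of speech per diarization cluster label."""
    out = {}
    for seg in segments:
        out[seg["label"]] = out.get(seg["label"], 0.0) + seg["end"] - seg["start"]
    return out

def drop_minor_speakers(segments, episode_len, min_frac=0.02):
    """Heuristic from the episode: someone who speaks for 30 seconds of a
    one-hour show is probably an ad read, not the host or the guest."""
    time_by = total_speaking_time(segments)
    return [s for s in segments
            if time_by[s["label"]] / episode_len >= min_frac]

segments = [
    {"label": "A", "start": 0, "end": 1700},     # host
    {"label": "B", "start": 1700, "end": 3420},  # guest
    {"label": "C", "start": 3420, "end": 3450},  # 30s ad spot
]
main = drop_minor_speakers(segments, episode_len=3600)
labels = {s["label"] for s in main}
```

The same idea extends to the clustering step he mentions: priors about how podcasts are structured (two or three dominant speakers, ads in short bursts) can seed or constrain the embedding clustering.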
And that's also why I was feeling for the Bee AI guys: all of these heuristics are probably almost impossible for them to use, because it can just be any situation, anything.

Kevin [00:38:34]: So that's one thing that we do. Another thing is that we actually combine it with LLMs. So the transcript, the LLMs, and the speaker diarization, bringing all of these together to recalibrate some of the switching points: when does one speaker stop, when does the next one start?

swyx [00:38:51]: The LLMs can add errors as well. I wouldn't feel safe using them to be so precise.

Kevin [00:38:58]: I mean, at the end of the day, just to not give a wrong impression: the speaker diarization we're doing is also not perfect, right?

swyx: I basically don't really notice it. I use it for search.

Kevin [00:39:09]: Yeah, it's not perfect yet, but it's gotten quite good. Especially if you take a latest episode and compare it to an episode that came out a year ago, we've improved it quite a bit.

swyx [00:39:23]: Well, it's beautifully presented. I love that I can click on the transcript and it goes to the timestamp. So simple, but it should exist.

Kevin: Yeah, I agree.

swyx: So here I'm loading a two-hour episode of Detect Me Right Home, where there are a lot of different guests calling in, and you've identified the guest names. These are all LLM-based. It's really nice.

Kevin [00:39:49]: Yeah, the speaker names.

swyx [00:39:50]: I would say that, as a power user of all these tools, you have done a better job than Descript.

Kevin: Okay, wow.

swyx: Descript has so much funding, they had OpenAI invest in them, and they still suck. So, I don't know, keep going. You're doing great.

Kevin: Yeah, thanks.
Kevin [00:40:12]: I would say, especially for anyone listening who's interested in building a consumer app with AI: I think the most important thing, especially if your background is in AI and you love working with AI, is just to keep reminding yourself of what's actually the job to be done here. What does the consumer actually want? For example, you were just delighted by the ability to click on a word and jump there. This is not rocket science. You don't have to be Andrej Karpathy to come up with that and build it, right? And I think that's something that's super important to keep in mind.

swyx [00:40:52]: Yeah. Amazing. I mean, there are so many features, right? It's so packed. There are quotes that you pick up, there's summarization. Oh, by the way, I'm going to use this as my official feature request: I want to customize how it's summarized. I want to have a custom prompt. Because your summarization is good, but I have different preferences, right?

Kevin [00:41:14]: So, one thing that you can already do today... I completely get your feature request.

swyx [00:41:18]: I'm sure people have asked for it.

Kevin [00:41:19]: Maybe just in general, as to how I see the future: in the future, I think everything will be personalized. This is not specific to us. And today we're still in a phase where the cost of LLMs matters, at least if you're working with context windows as long as ours; there are a lot of tokens in an entire podcast, so you still have to take that cost into consideration. If we regenerated everything for every single user, it would get expensive.
But in the future this cost will continue to go down, and then it will just be personalized. That being said, you can already today go to the player screen, open up the chat, and just ask for a summary in your style.

swyx [00:42:13]: Yeah. Okay. I mean, I listen to consume, you know? I've never really used this feature. I think that's me being a slow adopter.

Kevin [00:42:26]: You can just type anything. But I think what you're describing, and maybe that is also an interesting topic to talk about: basically I told you, look, we have this chat, you can just ask for it. And this is how ChatGPT works today. But if you're building a consumer app, you have to move beyond the chat box. People do not want to always type out what they want. So your feature request, even though it's theoretically already possible, is actually: hey, I just want to open up the app and it should just be there, in a nicely formatted, beautiful way, such that I can read or consume it without any issues. And I think that's in general where a lot of the opportunities currently lie in the market: if you want to build a consumer app, taking the capability and the intelligence, but finding out what the actual user interface is, the best way a user can engage with this intelligence in a natural way.

swyx [00:43:24]: This is something I've been thinking about as kind of like AI that's not in your face. Because right now, we like to say, oh, Notion has Notion AI, and we have the little thing there; or some other platform has the sparkle magic wand emoji: that's our AI feature, use this. And it's really in your face. A lot of people don't like it.
It should just kind of become invisible, like an invisible AI.

Kevin [00:43:49]: 100%. The way I see it, AI is the electricity of the future. We don't talk about how this microphone uses electricity, or this phone; you don't think about it that way. It's just in there, right? It's not an electricity-enabled product; it's just a product. It will be the same with AI. Right now it's still something you use to market your product, and we do the same, because it's still something that makes people realize, ah, they're doing something new. But at some point it'll just be a podcast app, and it will be normal that it has all of this AI in there.

swyx [00:44:24]: I noticed you do something interesting in your chat, where you source the timestamps. Is that part of the prompt, or is there a separate pipeline that adds sources?

Kevin [00:44:33]: This is actually part of the prompt. This is all prompt engineering; you should be able to click on it.

swyx: Yeah, I clicked on it.

Kevin: It's all prompt engineering: how to provide the context, because we provide all of the transcript, and then getting the model to respond in a correct way with a certain format, and then rendering that on the front end. This is one of the examples where I would say it's so easy to create a quick demo of this. You can just go to ChatGPT, paste the transcript in, and say, do this. Fifteen minutes and you're done. But getting this to a production level where it actually works 99% of the time, that is where the difference lies.
So, for this specific feature, we actually also have countless regexes that are just there to correct certain things that the LLM is doing, because it doesn't always adhere to the format correctly, and then it looks super ugly on the front end. So we have certain regexes that correct that. And maybe you'd ask: why don't you use an LLM for that? Because that's, again, the AI-native way; who uses regexes anymore? But with the chat, for user experience it's very important that you have streaming, because otherwise you wait so long until your message arrives. So we're streaming live, just like ChatGPT: you get the answer and it's streaming the text. And if you're streaming the text and something is incorrect, it's currently not easy to just pipe this stream into another stream and get back a stream that corrects it. That would be amazing. I don't know, maybe you can answer that: do you know of any?

swyx [00:46:19]: There's no API that does this. You cannot stream in. If you own the models, you can: whatever token sequence has been emitted, start loading that into the next one. If you fully own the models... it's probably not worth it. Most engineers who are new to AI research and benchmarking actually don't know how much regexing goes on in normal benchmarks. It's just this ugly list of a hundred different matches for some criteria that you're looking for. No, it's very cool. I think it's an example of real-world engineering. Do you have tooling that you're proud of that you've developed for yourselves, or is it just a test script?
Kevin: I think it's a bit more... I guess the term that has come up is vibe coding. No, sorry, that's actually something else in this case: vibe evals was a term that someone brought up in one of the talks, I think on the first day of the conference, because a lot of the talks were about evals, which is so important. And I think for us it's a bit more vibe evals. That's also part of being a startup: we can take risks, we can take the cost of it maybe sometimes failing a little bit or being a little bit off, and our users know that, and they appreciate in return that we're moving fast, iterating, and building amazing things. Whereas at a Spotify or something like that, half of our features would probably be in a six-month review through legal, or I don't know what, before they could be rolled out.

swyx [00:48:04]: Let's just say Spotify is not very good at podcasting. I have a documented dislike for their podcast features, just overall. Any other LLM-focused engineering challenges or problems that you want to highlight?

Kevin [00:48:20]: I think it's not unique to us, but it goes again in the direction of handling the uncertainty of LLMs. For example, at the end of last year we did sort of a Snipd Wrapped, and we thought it would be fun to do something with an LLM and the snips that a user has. There were three, let's say, unique LLM features. One was that we assigned a personality to you based on the snips that you have. It was all, I guess, a bit of a fun, playful thing.

swyx: I'm going to look up mine.
I forgot mine already.

swyx [00:48:57]: I don't know whether it's actually still in there... we all took screenshots of it.

Kevin [00:49:01]: Ah, we posted it in the Discord. The second one was a learning scorecard, where we identified the topics that you snipped on the most, and you got a little score for that. And the third one was a quote that stood out. The quote is actually a very good example of where we would run that for a user, and most of the time it was an interesting quote, but every now and then it was a super boring quote where you'd think: why did you select that? Come on. The solution there was actually just to say: hey, give me five. So it extracted five quotes as candidates, and then we piped them into a different model as a judge, LLM-as-a-judge. There we used a much better model, because with the initial model, as I mentioned earlier, we do have to look at the costs, since so much text goes into it, so there we use a cheaper model. But the judge can be a really good model that then just chooses one out of five. This is a practical example.

swyx [00:50:03]: I can't find it. Bad search in Discord. So you do recommend having a much smarter model as a judge, and that works for you. Interesting. I think this year I'm very interested in LLM-as-a-judge being more developed as a concept. For things like Snipd Wrapped it's fine; it's entertaining, there's no right answer.

Kevin [00:50:29]: We also use the same concept for our books feature, where we identify the books that get mentioned.
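The five-candidates-plus-judge pattern Kevin describes can be sketched as follows. The model names and the `call_llm` helper are hypothetical stand-ins, not Snipd's actual code:

```python
def call_llm(model, prompt):
    # Placeholder for a real API call (OpenAI, Gemini, ...).
    raise NotImplementedError

def pick_best_quote(transcript, extract=call_llm, judge=call_llm):
    # Cheap model does the token-heavy extraction over the full transcript.
    candidates = extract(
        "cheap-model",
        f"Extract the 5 most interesting verbatim quotes:\n{transcript}",
    )
    # Strong model only sees five short candidates, so judging stays cheap.
    choice = judge(
        "strong-judge-model",
        "Pick the single best quote (reply with its number):\n"
        + "\n".join(f"{i + 1}. {q}" for i, q in enumerate(candidates)),
    )
    return candidates[int(choice) - 1]

# Demo with stubbed models standing in for the API calls:
fake_extract = lambda model, prompt: ["boring", "great quote", "ok", "meh", "fine"]
fake_judge = lambda model, prompt: "2"
best = pick_best_quote("...transcript...", extract=fake_extract, judge=fake_judge)
```

The cost asymmetry is the point: the expensive judge reads a handful of candidates, while the long-context extraction runs on the cheap model.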
Because there it's the same thing: 90% of the time it works perfectly out of the box, one-shot, and every now and then it just starts identifying books that were not really mentioned, or that are not books, or it starts making up books. And there we basically have the same thing: another LLM challenging it. And actually, with the speakers we do the same, now that I think about it. So I think it's a great technique.

swyx [00:51:05]: Interesting. You run a lot of calls.

Kevin [00:51:07]: Yeah.

swyx [00:51:08]: Okay. You mentioned costs. You moved from self-hosting a lot of models to the big lab models, OpenAI and Google. Not Anthropic?

Kevin [00:51:18]: No, we love Claude. In my opinion, Claude is the best one when it comes to the way it formulates things. The personality. I actually really love it. But the cost is still high.

swyx [00:51:36]: So you tried Haiku, but you have to have Sonnet?

Kevin [00:51:40]: With Haiku we haven't experimented too much, actually. We obviously work a lot with 3.5 Sonnet: for coding, in Cursor, and in general for brainstorming. We use it a lot; I think it's a great brainstorming partner. But for a lot of the things that we've done, we opted for different models.

swyx [00:52:00]: What I'm trying to drive at is: how much cheaper can you get if you go from closed models to open models? Maybe it's 0% cheaper, maybe it's 5%, or maybe it's 50%. Do you have a sense?

Kevin [00:52:13]: It's very difficult to judge.
I don't really have a sense, but I can give you a couple of thoughts that have gone through our minds over time. We do realize that we have a couple of tasks where there are just so many tokens going in that at some point it will make sense to offload some of that to an open-source model. But going back to it: we're a startup, right? We're not an AI lab. For us, the most important thing is to iterate fast, because we need to learn from our users and improve, and for that velocity of iteration, the closed models hosted by OpenAI, Google, and so on are just unbeatable, because it's just an API call. You don't need to worry about so much complexity behind that. That is, I would say, the biggest reason why we're not doing more in this space. But there are other thoughts, also for the future. I see two different usage patterns of LLMs. One is the pre-processing of a podcast episode, the initial processing: transcription, speaker diarization, chapterization. We do that once, and this usage pattern is quite predictable, because we know how many podcasts get released and when, so we can provision a certain capacity and run it 24/7: it's one big queue running 24/7.

swyx [00:53:44]: What's the queue job runner? Is it a Django thing, just the Python one?

Kevin [00:53:49]: No, that's just our own: our database, and the backend talking to the database, picking up jobs and writing results back.

swyx: I'm just curious about orchestration and queues.

Kevin: We do of course have a lot of other orchestration, where we use Google Pub/Sub. But okay.
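The "database as job queue" setup Kevin describes, with the backend polling a jobs table rather than using a queue framework, might look like this minimal SQLite sketch. The schema and state names are assumptions, not Snipd's actual tables:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, episode TEXT, state TEXT)")
db.execute("INSERT INTO jobs (episode, state) VALUES ('ep-001', 'pending')")
db.commit()

def claim_job(conn):
    """Atomically mark one pending job as running and return it."""
    with conn:  # transaction, so two workers can't grab the same row
        row = conn.execute(
            "SELECT id, episode FROM jobs WHERE state = 'pending' LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET state = 'running' WHERE id = ?", (row[0],))
        return row

job = claim_job(db)
# ... transcribe / diarize / chapterize the episode, then:
db.execute("UPDATE jobs SET state = 'done' WHERE id = ?", (job[0],))
db.commit()
state = db.execute("SELECT state FROM jobs WHERE id = ?", (job[0],)).fetchone()[0]
```

Because the pre-processing load is predictable (release schedules are known), a plain polled table running 24/7 at fixed capacity is enough; the bursty, user-triggered pattern he describes next is the one that suits pay-per-call APIs instead.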
So we have this usage pattern of very predictable usage, where we can max out utilization. And then there's this other pattern where, for example with the snips, a user action triggers an LLM call and it has to be real time. There can be moments of high usage and moments with very little usage. That's where these LLM API calls are just perfect, because you don't need to worry about scaling up, scaling down, and handling those issues.

swyx: Serverless versus serverful.

Kevin [00:54:45]: Yeah, exactly. I see OpenAI and all of these other providers a bit as the AWS of AI. It's similar to how, before AWS, you would have to have your own servers, and buy new servers or get rid of servers; with AWS it just became so much easier to ramp things up and down. And this is taking that to the next level, for AI.

swyx [00:55:18]: I am a big believer in this. Basically it's intelligence on demand. We're probably not using it enough in our daily lives. We should be able to spin up a hundred things at once, go through things, and then stop. I feel like we're still trying to figure out how to use LLMs in our lives effectively.

Kevin [00:55:38]: 100%. I think that goes back to where, for me, the big opportunity is if you want to do a startup: it's not about more intelligence, you can let the big labs handle that challenge.

swyx [00:55:48]: It's the existing intelligence: how do you integrate it, how do you actually incorporate it into your life? AI engineering. Okay, cool. The one other thing I wanted to touch on was multimodality in frontier models.
Dwarkesh had an interesting application of Gemini recently, where he just fed raw audio in and got diarized transcription with timestamps out. And I think that will come. Basically, what we're saying here is another wave of transformers eating things, because right now pipelines are pretty much single-modality: you have Whisper, you have a pipeline, and everything around it. You can't just say, no, no, we only feed in the raw files. Do you think that will be realistic for you?

Kevin [00:56:38]: I 100% agree. Everything that we talked about earlier with the speaker diarization and heuristics: in the future that would just be, put everything into a big multimodal LLM and it will output everything that you want. I've also experimented with that.

swyx: With Gemini 2?

Kevin: With Gemini 2.0 Flash, just for fun. The big difference right now is still the cost: doing speaker diarization or transcription this way is hugely different in cost from the pipeline that we've built up.

swyx [00:57:15]: I need to figure out what that cost is, because in my mind 2.0 Flash is so cheap. But maybe not cheap enough for you.

Kevin [00:57:23]: No, I mean, if you compare it to Whisper and speaker diarization, and especially self-hosting it... But we will get there, right? This is just a question of time. As soon as that happens, we'll be the first ones to switch.

swyx [00:57:33]: Awesome. Anything else that you're eyeing on the horizon? Like: we are thinking about this feature, we're thinking about incorporating this new functionality of AI into our app?
Kevin [00:57:50]: Yeah. I mean, there are so many areas that we're thinking about; our challenge is more...

swyx: Choosing.

Kevin: Choosing, yeah. For me, looking at the next couple of years, there are basically four big areas that interest us a lot. One is content. Right now it's podcasts; you did mention that you can also upload audiobooks and YouTube videos.

swyx: YouTube. I actually use the YouTube one a fair amount.

Kevin: But in the future we want to also have audiobooks natively in the app, and we want to enable AI-generated content. Just think of taking deep research and NotebookLM: put these together. That should be in our app. The second area is discovery.

swyx [00:58:38]: I noticed that you don't have, so you
Wes and Scott talk with Aaron Francis about Fusion for Laravel, a new way to seamlessly integrate PHP into JavaScript. They discuss how Fusion expands on Inertia, its potential for React support, and how it simplifies full-stack development.

Show Notes
00:00 Welcome to Syntax!
01:22 Aaron's background in PHP, Yii, and Laravel
02:27 What is Fusion for Laravel? (Fusion for Laravel)
09:14 How Fusion works
13:57 The benefits of Laravel
19:18 Invalidation and caching
25:20 Brought to you by Sentry.io
25:32 Optimistic UI
28:28 React integration?
31:44 Fusion's original name (and the naming process)
33:30 Laravel's approach to frontend frameworks (Livewire)
37:32 Databases and scaling
41:27 Postgres extensibility and hosting options (Crunchy Data, Xata)
47:44 The vision for Fusion
48:31 Sick Picks + Shameless Plugs

Sick Picks
Aaron: Better Display CLI

Shameless Plugs
Aaron: High Performance SQLite, Mastering Postgres, Screencasting.com

Hit us up on Socials!
Syntax: X, Instagram, TikTok, LinkedIn, Threads
Wes: X, Instagram, TikTok, LinkedIn, Threads
Scott: X, Instagram, TikTok, LinkedIn, Threads
Randy: X, Instagram, YouTube, Threads
Scott and Wes answer your listener questions! They debate Axios vs. Fetch, discuss whether Next.js is overkill without a backend, talk htmx and Alpine, dive into tech career transitions, and tackle everything from podcast ads to password hashing myths.

Show Notes
00:00 Welcome to Syntax!
00:55 Scott's health update
04:11 Submit your questions
04:26 Is Axios still worth using over Fetch? (shiki, xior, ky)
10:17 Does Alpine.js solve htmx's client-side limitations? (Syntax Ep. 868: The State of JavaScript; Server Driven Web Apps With HTMX; Syntax Ep. 568: Supper Club × Caleb Porzio; Alpine.js; Inertia.js)
16:47 How should I host my database for a local-first app? (Neon Tech)
22:50 Brought to you by Sentry.io
24:14 Should I use Next.js if I want a separate backend? (Create Vite Extra)
32:08 Are ad networks like BuySellAds worth it for podcasts?
36:36 Can I transition from airline pilot to senior software developer?
41:23 Is Base64 encoding a valid alternative to password hashing?
45:43 How do I use unexported functions from a third-party package?
48:09 How do you stay on top of package and browser updates? (Syntax Ep. 425: Updating Project Dependencies; npm-check-updates)
52:38 Why are Chrome and Firefox's mobile presets outdated?
57:20 Should I give feedback on bad UX/UI designs from agencies?
01:01:53 Sick Picks + Shameless Plugs

Sick Picks
Scott: Nothing Ear (a)
Wes: SmallRig Phone Cage

Shameless Plugs
Wes: Syntax on YouTube

Hit us up on Socials!
Syntax: X, Instagram, TikTok, LinkedIn, Threads
Wes: X, Instagram, TikTok, LinkedIn, Threads
Scott: X, Instagram, TikTok, LinkedIn, Threads
Randy: X, Instagram, YouTube, Threads