Hosting service for software projects using Git
Stories covered: the FBI investigates suspicious activities on an agency network; over 100 GitHub repositories distribute the BoryptGrab stealer; hackers abuse .arpa DNS and IPv6 to evade phishing defenses. Get links to all the stories in our show notes: https://cisoseries.com/cybersecurity-news-fbi-network-breach-github-distributes-stealer-hackers-abuse-arpa/ Huge thanks to our sponsor, Dropzone AI. Here is a number worth knowing before RSAC: the average enterprise SOC sees tens of thousands of alerts a day. Most get triaged. A fraction get thoroughly investigated. The rest sit in the queue or get auto-closed. Dropzone AI puts AI SOC agents on every one of those alerts. Every alert investigated, end to end, across your full tool stack, around the clock. Over 300 deployments in production today. They are at RSAC this year, booth 455. dropzone.ai/rsa-2026-ai-diner
This episode explores vocabulary related to pathology (patologia), business systems (systemy biznesowe), technology (technologia), and digital operations (operacje cyfrowe) in Polish. We dive into how to discuss problems (problemy), solutions (rozwiązania), networks (sieci), and modern business infrastructure – all in practical, everyday Polish. Welcome to the Learn Polish Podcast – your immersive gateway to mastering Polish through real conversations, cultural insights, and practical everyday language. Each episode blends authentic Polish dialogue with clear English explanations, helping you build vocabulary naturally while exploring Polish business concepts, technology terms, and modern life topics. Whether you're a complete beginner or advancing your skills, join us as we make learning Polish engaging, practical, and fun. From pathology (patologia) to digital systems (systemy cyfrowe), we cover the phrases you actually need for today's world. Find more episodes, lesson materials, and resources at www.learnpolishpodcast.com. You can also find us on YouTube, Spotify, and Rumble. Looking for virtual assistance? Visit va.world. Join our school groups on Brain Upgrade and podcasting – links in the show notes. Need lessons in Polish or Spanish? Check the links in the description for both audio and video content. Try our free brain upgrade course at school.com/brainupgrade
Vocabulary covered in this episode (English | Polish | Pronunciation | Example):
Pathology | Patologia | pah-to-lo-GHEE-ah | To jest patologia. (This is a mess/pathology.)
System | System | SIS-tem | System działa. (The system works.)
Problem | Problem | PRO-blem | Mamy problem. (We have a problem.)
Solution | Rozwiązanie | roz-vy-ZA-nyeh | Znajdźmy rozwiązanie. (Let's find a solution.)
Network | Sieć / Network | seech / NET-work | Sieć działa dobrze. (The network works well.)
Technology | Technologia | tek-no-lo-GHEE-ah | Nowa technologia. (New technology.)
Digital | Cyfrowy | tsih-FRO-vih | System cyfrowy. (Digital system.)
Business | Biznes | BEES-nes | Mój biznes rośnie. (My business is growing.)
Product | Produkt | PRO-dukt | Nowy produkt. (New product.)
Service | Usługa | oo-SWOO-gah | Dobra usługa. (Good service.)
Agency | Agencja | ah-GEN-tsya | Pracuję w agencji. (I work at an agency.)
Marketing | Marketing | MAR-ke-ting | Marketing internetowy. (Internet marketing.)
Telephone | Telefon | teh-LEH-fon | Zadzwoń na telefon. (Call the phone.)
Call | Połączenie / Zadzwonić | po-won-CHEN-yeh / zad-ZVO-neech | Zadzwoń do mnie. (Call me.)
Object | Obiekt | OB-yekt | Jaki to obiekt? (What object is this?)
Version | Wersja | VER-shah | Nowa wersja systemu. (New system version.)
Target | Cel / Target | tsel / TAR-get | Jaki jest cel? (What is the target?)
Goal | Cel | tsel | Mój cel to... (My goal is...)
Bonus | Bonus | BO-nus | Dostałem bonus. (I got a bonus.)
Million | Milion | MEE-lyon | Jeden milion. (One million.)
Percent | Procent | PRO-tsent | Dziesięć procent. (Ten percent.)
Statistics | Statystyka | sta-TIS-ti-kah | Statystyka pokazuje... (Statistics show...)
Data | Dane / Data | DAH-neh / DAH-tah | Analiza danych. (Data analysis.)
Machine | Maszyna | mah-SHI-nah | Maszyna działa. (The machine works.)
Robot | Robot | RO-bot | Robot automatyzuje. (The robot automates.)
Automation | Automatyzacja | au-to-mah-ti-ZA-tsya | Automatyzacja procesów. (Process automation.)
Application | Aplikacja | ah-plee-KA-tsya | Nowa aplikacja. (New application.)
Software | Oprogramowanie | o-pro-gra-mo-VAH-nyeh | Nowe oprogramowanie. (New software.)
Hardware | Sprzęt | SPR-shent | Nowy sprzęt. (New hardware.)
GitHub | GitHub | GIT-hub | Kod na GitHubie. (Code on GitHub.)
Website | Strona internetowa | STRO-nah in-ter-ne-TO-vah | Moja strona www. (My website.)
Domain | Domena | do-MEN-nah | Rejestracja domeny. (Domain registration.)
Calendar | Kalendarz | kal-EN-darsh | Sprawdź kalendarz. (Check the calendar.)
Schedule | Harmonogram / Grafik | har-mo-NO-gram / GRA-fik | Jaki jest grafik? (What's the schedule?)
Event | Wydarzenie / Event | vih-dah-ZHEN-yeh / EH-vent | Organizuję event. (I'm organizing an event.)
Organization | Organizacja | or-ga-nee-ZA-tsya | Dobra organizacja. (Good organization.)
Union | Unia / Związek | OO-nya / ZVYON-zek | Unia Europejska. (European Union.)
Change | Zmiana | ZMYAH-nah | Czas na zmianę. (Time for change.)
Smart | Smart / Inteligentny | smart / in-te-li-GENT-nih | Smart rozwiązanie. (Smart solution.)
Positive | Pozytywny | po-zi-TIV-nih | Pozytywne myślenie. (Positive thinking.)
Logic | Logika | lo-GHEE-kah | Logika biznesu. (Business logic.)
Context | Kontekst | KON-tekst | W kontekście... (In the context of...)
Access | Dostęp | DOH-stemp | Mam dostęp. (I have access.)
Inspection | Inspekcja / Kontrola | in-SPEK-tsya / kon-TRO-lah | Inspekcja jakości. (Quality inspection.)
Quality | Jakość | YAH-koshch | Wysoka jakość. (High quality.)
Customer | Klient | KLEE-ent | Klient jest ważny. (The customer is important.)
Private | Prywatny | pri-VAT-nih | Prywatna firma. (Private company.)
Public | Publiczny | poo-BLEECH-nih | Sektor publiczny. (Public sector.)
National | Narodowy / Krajowy | na-ro-DO-vih / krai-YO-vih | Krajowa sieć. (National network.)
International | Międzynarodowy | myen-dza-na-ro-DO-vih | Międzynarodowa firma. (International company.)
AI | AI / Sztuczna inteligencja | ah-ee / SHTOOCH-nah in-te-li-GEN-tsya | AI zmienia biznes. (AI is changing business.)
Upgrade | Upgrade / Aktualizacja | UP-grade / ak-tu-a-li-ZA-tsya | Czas na upgrade. (Time for an upgrade.)
Training | Trening / Szkolenie | TRE-ning / shko-LEN-yeh | Szkolenie online. (Online training.)
Process | Proces | PRO-tses | Proces automatyzacji. (Automation process.)
Store | Sklep / Magazyn | sklep / ma-ga-ZIN | Sklep internetowy. (Online store.)
Source | Źródło | ZWOO-dwo | Źródło danych. (Data source.)
Has the cost of software development officially dropped below the minimum wage? Andrew and Ben examine this economic shift alongside the rapid open-source growth and security implications of the OpenClaw project. They also explore Steve Yegge's concept of a federated wasteland for orchestrators and how the new Perplexity Computer is stepping up to act as a persistent, always-on digital coworker.
Follow the show: Subscribe to our Substack, follow us on LinkedIn, subscribe to our YouTube Channel, leave us a review.
Follow the hosts: Andrew, Ben, Dan.
Follow today's stories: OpenClaw rocks to GitHub's most-starred status, but is it safe? - Welcome to the Wasteland: A Thousand Gas Towns - Introducing Perplexity Computer - Software development now costs less than the wage of a minimum-wage worker - Scott Werner's Works on My Machine - Traffic Jam Explorer
OFFERS Start Free Trial: Get started with LinearB's AI productivity platform for free. Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era. LEARN ABOUT LINEARB AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production. AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance. AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil. MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
Ryan Carson taught over 1,000,000 people how to code at Treehouse and spent 25% of his entire life doing it. Now he says everything about that process needs to change. In this livestream, Ryan joins Corey Noles and Grant Harvey to rethink programming education from scratch for a world where AI agents can write production code, pass competitive coding challenges, and ship features while you sleep. We'll cover:
Stolen Gemini API Key Triggers $82K Bill, Accenture Buys Ookla, OpenAI vs GitHub, and Meta Smart Glasses Privacy Jim Love covers multiple tech stories: a three-developer startup in Mexico saw its Google Gemini bill jump from about $180/month to $82,314 in two days after attackers used a stolen API key, highlighting the financial and security risks of usage-based AI APIs, limits, and autonomous agents. Accenture is buying Ookla (Speedtest and Downdetector) for about $1.2B, aiming to monetize its large real-world internet performance dataset for consulting and infrastructure work. Reports say OpenAI may be developing a developer platform that could compete with Microsoft's GitHub, complicating their partnership. China's Minimax launches Max Claw, a cloud "always-on" AI agent deployable in 10 seconds, raising broader access and data-security concerns. Apple's MacBook Neo looks inexpensive but has fixed 8GB memory and paid storage upgrades. Meta's Ray-Ban smart glasses raise privacy questions around stored AI interactions and human review. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt 00:00 Sponsor Message Meter 01:04 Gemini Key Bill Shock 04:46 Accenture Buys Ookla 06:26 OpenAI vs GitHub Rumors 08:07 Minimax Max Claw Agents 11:07 MacBook Neo Value Trap 12:51 Meta Smart Glasses Privacy 14:56 Wrap Up and Thanks
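The bill-shock story implies a simple defense worth spelling out: enforce a hard spending cap in your own code before any metered API call goes out, so a leaked key can burn at most a fixed amount. The sketch below is purely illustrative; `BudgetGuard`, its rates, and the cap are hypothetical names, not part of any vendor SDK.

```python
class BudgetGuard:
    """Hypothetical client-side spend cap for a usage-billed AI API.

    Tracks estimated cost locally and refuses calls that would push
    the running total past the cap.
    """

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int, usd_per_1k_tokens: float) -> bool:
        cost = tokens / 1000 * usd_per_1k_tokens
        if self.spent_usd + cost > self.cap_usd:
            return False  # refuse the call; alert a human instead of paying
        self.spent_usd += cost
        return True


guard = BudgetGuard(cap_usd=1.0)
guard.charge(1000, usd_per_1k_tokens=0.5)   # True: $0.50 spent
guard.charge(1000, usd_per_1k_tokens=0.5)   # True: $1.00 spent, at the cap
guard.charge(1000, usd_per_1k_tokens=0.5)   # False: would exceed the cap
```

In practice you would pair a guard like this with provider-side quotas and key rotation, since a client-side check alone cannot stop an attacker who already holds the key and calls the API directly.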
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Full Audio at https://podcasts.apple.com/us/podcast/full-rundown-the-great-infrastructure-shift-apples/id1684415169?i=1000753128321
On this week's show, Patrick Gray, Adam Boileau and James Wilson discuss the week's cybersecurity news. They cover: The US-Israeli attack on Iran had a whole lot of cyber. It's clearly in the playbook now! The NSA Triangulation / L3Harris Trenchant iOS exploit kit is on the loose, and being used by Chinese crypto scammers. So long, Madhu Gottumukkala, but CISA's annus horribilis continues. Adam "humbug" Boileau complains about the AirSnitch wifi attack just being three ethernets in a trenchcoat. ASD's Cisco SD-WAN threat hunting guide is clearly borne of … experience. This week's episode is sponsored by AI threat hunting platform Nebulock. Sydney Marrone joins to talk about how useful AI models are on the hunt, and her work building out an open source framework and maturity model. It's methodology agnostic, so you can adapt it for your environment, and the GitHub link is in the show notes! This episode is also available on YouTube. Show notes: Inside the plan to kill Ali Khamenei - Hacked traffic cams and hijacked TVs: How cyber operations supported the war against Iran | TechCrunch - Matthew Prince
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Anthropic's surge and OpenAI's latest updates highlight how the consumer AI race is becoming about far more than model benchmarks. This episode explores the questions that will actually shape the outcome – from vibes vs performance to agents, multimodality, monetization, switching costs, and ecosystem lock-in. In the headlines: OpenAI reportedly building a GitHub rival, Meta reorganizes its AI teams, Amazon explores ads in AI chatbots, and Stripe introduces token-based billing for AI apps.
PLEASE CONTRIBUTE TO OUR FEB AI USAGE PULSE SURVEY: https://aidailybrief.ai/pulse-survey
Want to build with OpenClaw? LEARN MORE ABOUT CLAW CAMP: https://campclaw.ai/ Or for enterprises, check out: https://enterpriseclaw.ai/
Brought to you by:
KPMG - Agentic AI is powering a potential $3 trillion productivity shift, and KPMG's new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow. Download it at www.kpmg.us/Navigate
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad: http://rackspace.com/ailaunchpad
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
Optimizely Agents in Action - Join the virtual event (with me!) free March 4: https://www.optimizely.com/insights/agents-in-action/
AssemblyAI - The best way to build Voice AI apps: https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process: https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results: https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? sponsors@aidailybrief.ai
Iran, Gavin Newsom and the NAACP Awards. What a weekend, man. Jump in with Janaya Future Khan. Project MVT on Github: https://github.com/mvt-project/mvt SUBSCRIBE + FOLLOW IG: www.instagram.com/darkwokejfk Youtube: www.youtube.com/@darkwoke TikTok: https://www.tiktok.com/@janayafk SUPPORT THE SHOW Patreon - https://patreon.com/@darkwoke Tip w/ a One Time Donation - https://buymeacoffee.com/janayafk Have a query? Comment? Reach out to us at: info@darkwoke.com and we may read it aloud on the show!
The Information's Aaron Holmes and Guild.ai CEO James Everingham talk with TITV Host Akash Pasricha about OpenAI's secret project to build a GitHub competitor and the release of GPT 5.4. We also talk with The Information's Juro Osawa and Jing Yang about how OpenClaw is sparking a "FOMO" fueled developer frenzy in China, and we get into the shifting constraints of the AI chip sector with Sriram Viswanathan, Founding Managing Partner at Celesta Capital.
Articles discussed on this episode:
https://www.theinformation.com/newsletters/ai-agenda/openais-next-ai-model-will-extreme-reasoning
https://www.theinformation.com/articles/openai-developing-alternative-microsofts-github
https://www.theinformation.com/articles/openclaw-rips-chinas-tech-startup-landscape
Subscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_h
Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda
TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.
Follow us: X: https://x.com/theinformation IG: https://www.instagram.com/theinformation/ TikTok: https://www.tiktok.com/@titv.theinformation LinkedIn: https://www.linkedin.com/company/theinformation/
An airhacks.fm conversation with Thorsten Hoeger (@hoegertn) about: first computer experience with an IBM 8086 and learning programming by modifying the QBasic Gorilla game, early programming journey from QBasic to Visual Basic and the discovery of event-driven programming, building a password security script for autoexec.bat as a childhood project, transition from Visual Basic to Java around 2005 starting with Java 1.4.2, working at a small bank in Stuttgart building a core banking system, experience with the Eclipse RCP rich client platform and the overhead of plugin architecture in business software, migration from Swing to an Eclipse RCP frontend with a JBoss application server backend, building a custom Spring-based microservice framework called Dwallin (Icelandic for dwarf) before Spring Boot existed, using Apache CXF for REST and RPC over messaging with ActiveMQ, comparison of Java development trajectories between annotation-based and XML-heavy approaches, discussion of the infamous Java and XML O'Reilly book that popularized XML configuration, XDoclet as a precursor to Java annotations, contrasting approaches of JBoss-based thin WAR deployments versus Spring-based embedded server microservices, university experience learning the Ada programming language and its strict compiler as excellent for learning programming, PL/SQL's Ada-based origins, brief experience with OSGi and strong criticism of its complexity and poor developer experience, comparison of OSGi with the Java Platform Module System (JPMS), founding the Taimos consulting company 10 years ago originally building BlackBerry enterprise software, pivoting to AWS migration consulting for regulated industries including banks and insurance companies, strong preference for serverless architecture with Lambda, Step Functions, API Gateway and DynamoDB, criticism of running Kubernetes on AWS versus using native services like ECS Fargate, the distinction between running "in the cloud" versus "on the cloud", detailed discussion of
why GraalVM native images are unnecessary on AWS Lambda due to compliance overhead and the memory allocation model, Quarkus and SnapStart as solutions for Lambda cold start problems, Java's cost efficiency on Lambda due to fast execution times, involvement with the AWS CDK since 2018-2019 including building L2 constructs for EC2 and AppSync, shift from code contributions to community organizing and prioritization work with the CDK team, launching CDK Terrain as a successor to CDK for Terraform, nuanced discussion of open source economics when the project primarily benefits a paid cloud provider, using GitHub as a personal index and dashboard for reusable project templates, consulting perspective on contributing to open source for code reuse across multiple clients, teaser for a future deep-dive episode on CDK internals and promoting Java usage with CDK. Thorsten Hoeger on Twitter: @hoegertn
Listen to FULL RUNDOWN at https://podcasts.apple.com/us/podcast/full-rundown-the-great-infrastructure-shift-apples/id1684415169?i=1000753128321
Summary: Tension rises as OpenAI prepares to compete with Microsoft's GitHub. Simultaneously, Google launches Gemini 3.1 Flash-Lite, setting a new industry standard for low-cost, high-speed intelligence.
Key Points:
OpenAI's Move: Developing an alternative to GitHub to escape service disruptions.
Google's Price War: Gemini 3.1 Flash-Lite launches at $0.25 per 1M tokens.
The Conflict: The first major signs of a "rivalry" between OpenAI and its primary investor, Microsoft.
Keywords: OpenAI GitHub, Gemini 3.1 Flash-Lite, Microsoft vs OpenAI, Developer Tools, AI Infrastructure, Cloud Wars.
Resources: The OpenAI Code Repository: Your "Safe Harbor" Guide at https://enoumen.substack.com/p/the-openai-code-repository-your-safe
This episode is made possible by our sponsors:
Joël and Sally cover all the bases as they look at improving their test suites' run times. Our hosts lay out some spicy takes on various test suites, comparing the key differences across the different forms of testing, where you might encounter pitfalls in each method, and how to make the most of each test. — Interested in exploring different test suites to see if they could improve your projects? Check out these articles on everything our hosts discussed today, as well as Joël's talk on slow tests. Avoiding Factory Bot - Why Factories? - Parallelisation in Testing - Joël's Talk Your hosts for this episode have been thoughtbot's own Joël Quenneville and Sally Hall. If you would like to support the show, head over to our GitHub page, or check out our website. Got a question or comment about the show? Why not write to our hosts: hosts@bikeshed.fm This has been a thoughtbot podcast. Stay up to date by following us on social media - YouTube - LinkedIn - Mastodon - BlueSky © 2026 thoughtbot, inc.
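Parallelisation, one of the linked topics above, mostly comes down to sharding tests across workers so the slowest shard finishes as early as possible. This language-agnostic sketch (Python, with made-up test names and timings) shows the common greedy heuristic: sort tests by recorded runtime, then always hand the next test to the currently lightest worker.

```python
def partition(tests, workers):
    """Greedy sharding: longest tests first, each assigned to the lightest bucket."""
    buckets = [{"total": 0.0, "tests": []} for _ in range(workers)]
    for name, seconds in sorted(tests, key=lambda t: -t[1]):
        lightest = min(buckets, key=lambda b: b["total"])
        lightest["tests"].append(name)
        lightest["total"] += seconds
    return buckets


# Hypothetical runtimes harvested from a previous CI run.
timings = [("models", 60.0), ("api", 45.0), ("ui", 30.0), ("lint", 15.0)]
shards = partition(timings, workers=2)
# Both shards balance at 75s: {models, lint} and {api, ui}.
```

Real runners (parallel_tests, Knapsack, pytest-xdist and friends) use variations of this idea; the key ingredient is persisting per-test timing data between CI runs so the split stays balanced as the suite grows.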
In this episode: Mark explains synesthesia and the experience of how it manifests in a Linux user, Alan spring cleans his GitHub, Martin gets busy with lazygit. You can send your feedback via show@linuxmatters.sh or the Contact Form. If you’d like to hang out with other listeners and share your feedback with the community, you can join us on: The Linux Matters Chatters on Telegram. The Linux Matters Subreddit. If you enjoy the show, please consider supporting us.
This week, we dive headfirst into Absolute Wonder Woman — a reimagining of Diana raised in hell by Circe — and we can't stop talking about how good this book is. We break down why this version finally captures the heart of Wonder Woman, why compassion is her real superpower, and why this heavy-metal redesign absolutely works. Along the way, we detour through Conan, grindhouse cinema, crocodile cult horror, and Peter's descent into AI-powered app building. It's a wild one — but mostly, we're here to say: go read this comic.
Show Notes
Opening Catch-Up
The collaborative nature of open source is often overlooked by both individuals and companies at first. Contrary to that, many projects and initiatives only succeed when they are developed in an open and collaborative environment. Think about successful, long-lived programming languages as an example. In this My Open Source Experience podcast episode, Ildiko chats with Brad Chamberlain about the Chapel project. Chapel is an open source programming language that Brad and the community have been developing and maintaining for over 20 years now. It was originally created for supercomputers and HPC use cases, to provide a language that is more efficient for machines and programmers alike. Over the years, however, PCs and laptops evolved to a level of parallelism at which Chapel became more widely usable in the industry. Have you tried it already? Learn more about:
- The Chapel programming language
- The consideration and study behind the Chapel project being created as open source
- Creating an open source project before GitHub existed
- Sometimes you need to aim for the skies over simplicity
- Moving Chapel under the High Performance Software Foundation (HPSF)
Hosted on Acast. See acast.com/privacy for more information.
English Edition (ByteSized): In this first episode of the new ByteSized dRTP season, sponsored by the STEP-UP programme from the EPSRC (UK), you'll meet Richard Acton. Richard created a tool to help you keep track of all the steps you should take to make your software shareable and reproducible, with checklists built right into your GitLab or GitHub repo.
Links:
https://rsspdc.org/ home page for the checklists
https://rsspdc.gitlab.io/slides/bytesize-workshop_2026-02-26.html#/outline
https://gitlab.com/rsspdc/checklists download the checklists MD files from here
https://www.software.ac.uk/news/software-management-plans Software management plan (SMP) from the Software Sustainability Institute
https://www.france-grilles.fr/presoft-software-management-plans-model/ another template of a SMP from Teresa Gomez-Diaz (Paris, France) - PRESOFT
https://hal.science/hal-01802565v1
I'd like to thank the STEP-UP project for their support of this podcast. STEP-UP is a collaboration between Imperial College London, King's College London, University College London and the University of Westminster. STEP-UP is funded by the Engineering and Physical Sciences Research Council (EPSRC) in the UK.
Get in touch: Thank you for listening! Merci de votre écoute! Vielen Dank fürs Zuhören!
Contact Details / Coordonnées / Kontakt:
Email: peter@code4thought.org
UK RSE Slack (ukrse.slack.com): @code4thought or @piddie
Bluesky: https://bsky.app/profile/code4thought.bsky.social
LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile)
LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile)
This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
Episode 800 is here! And we've got a packed show:
- Nintendo issues DMCAs on GitHub targeting open-source Switch emulators
- Xenoblade Chronicles X is getting a Switch 2 Edition
- Hideki Sato passes away at 77 – the Sega legend credited with helping create every single Sega console
In Change the System:
- Brandon throws down in Tekken 8 and puzzles through Baba Is You
- Eugene revisits Portal, Rogue Legacy, To the Moon, Sektori, and even the Virtual Boy
- Justin brings the mystery game of the week
Episode 800. Let's go.
Is it possible to have a professional Stream Deck on Linux for a fraction of what the leading brand costs? The answer is a resounding yes, and in this episode I tell you how I did it. Many of us have looked warily at the 150-euro devices, thinking they are little more than a "pretty button box". I had that prejudice myself, but after trying the budget alternatives and, above all, discovering the potential of the OpenDeck software, my view has completely changed. What will you learn in this episode?
Goodbye to prohibitive hardware: We look at options like Soomfon and Mars Gaming that offer the same functionality as Elgato for barely 50€.
OpenDeck, the Linux user's salvation: Discover this open-source tool written in Rust that lets you manage any Stream Controller on Linux, Windows and macOS.
Full compatibility: How to use plugins from the Elgato ecosystem directly in your free software.
Your phone as a controller: I explain how to use Tacto to turn any Android device into a Stream Deck without spending a cent.
Advanced integration: My personal setup with OBS to control recordings, and my configuration with the Niri window manager, using knobs for scrolling and desktop switching.
Detailed contents:
00:00:00 Introduction and the 15-euro keyboard
00:01:48 My prejudices about Stream Decks
00:03:10 The magic of dynamic LCD buttons
00:05:31 OpenDeck: the heart of your Stream Deck on Linux
00:06:41 Budget alternatives: Soomfon and Mars Gaming
00:08:43 Why I chose the model with knobs
00:10:47 OpenDeck in depth: plugins and compatibility
00:13:37 Customization and Multi OBS Controller
00:16:10 The free option: turn your Android into a controller with Tacto
00:18:55 My workflow: integration with OBS and the Niri window manager
00:21:51 Farewell and final thoughts
If you have ever wanted to automate your tasks, trigger sounds in your podcasts, or simply control your desktop with the turn of a dial, this episode gives you all the technical and budget details to make it happen on Linux. Don't forget to leave a rating on Spotify or Apple Podcasts if you enjoyed the content. Enjoy the episode! More information, links and notes at https://atareao.es/podcast/775
This show starts with an Android review, looking at Jonathan's newest tablet. It also covers the coming Android apocalypse, the age verification legislation, and the sudo-rs asterisk fight. Mesa is grappling with AI, Ardour has a couple of point releases, and Gnome is redirecting traffic to GitHub. Fedora has a new mobile experiment in PocketBlue, and the 0 A.D. game has a stable release. For tips we have PyNetscan for IP scanning, snapper for BTRFS snapshots, mediainfo for media file investigations, and espanso for automatic text expansion. Find the show notes at https://bit.ly/3N9X3ys and enjoy! Host: Jonathan Bennett Co-Hosts: Ken McDonald, Rob Campbell, and Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord. Sponsor: bitwarden.com/twit
Why did the academic elite fail to see Bitcoin coming? Dr. Adam Back (@adam3us), the inventor of Hashcash, explains that professors were too obsessed with centralized bank models to conceive of a proof of work system that replaces central authority. While the ivory tower refined flawed systems, cypherpunks built a reality that does not require a middleman.Adam's journey started on the front lines of digital privacy, using his PhD as a license to hack. He laid the foundation for electronic cash by prioritizing sovereign rights. As the founder of Blockstream, he is an OG who never sold out, famously moving past shitcoin bribes to protect his ethical reputation.We tackle the reality of scaling. Adam argues that while Bitcoin is hard to change by design, the lightning network and sidechains allow for high-speed trade without risking the base layer. This modular approach lets Bitcoin evolve into a global financial layer while staying decentralized, proving the skeptics wrong one block at a time.In El Salvador, the government rejected shitcoin pitches to double down on Bitcoin. Adam notes this homegrown success could see the country rival major powers like Germany. It is a blueprint for using sound money to leapfrog the legacy financial system.Adam is now focused on filling innovation gaps from privacy to treasury reserves. The mission to replace fiat is just beginning. Subscribe and comment. 
Is El Salvador the next Singapore? —Bitcoin Beach Team
Connect and learn more about Adam Back:
X: https://x.com/adam3us
Blockstream X: https://x.com/Blockstream
Hashcash Web: http://www.hashcash.org/
Cypherspace Web: http://www.cypherspace.org/adam/
Blockstream Web: https://blockstream.com/
GitHub: https://github.com/Blockstream
The Liquid Network: https://liquid.net/
Blockstream Jade: https://blockstream.com/jade/
Support and follow Bitcoin Beach:
X: https://www.twitter.com/BitcoinBeach
IG: https://www.instagram.com/bitcoinbeach_sv
TikTok: https://www.tiktok.com/@livefrombitcoinbeach
Web: https://www.bitcoinbeach.com
Browse through this quick guide to learn more about the episode:
00:00 Intro
01:33 How Bitcoin achieves decentralized trust without banks
04:49 Has institutional Bitcoin ruined the cypherpunk mission?
10:48 How Adam Back spotted and rejected shitcoin scams
12:43 Is Tether (USDT) a systemic risk to the Bitcoin ecosystem?
15:44 Why El Salvador succeeded where other nations failed
20:32 Scaling Bitcoin via Blockstream, sidechains, and Lightning
25:50 Adam Back on the 100-year mission for sound money
Live From Bitcoin Beach
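Hashcash, the proof-of-work scheme Adam Back invented, is simple enough to sketch in a few lines: keep incrementing a counter until the hash of `resource:counter` has a required number of leading zero bits. Minting a stamp costs brute-force work; verifying it costs one hash. This is a toy illustration only (SHA-256 and a bare counter stand in for the real stamp format), not the actual Hashcash specification.

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int) -> str:
    """Brute-force a stamp whose SHA-256 has `bits` leading zero bits."""
    for counter in count():
        stamp = f"{resource}:{counter}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest >> (256 - bits) == 0:
            return stamp

def verify(stamp: str, bits: int) -> bool:
    """A single hash suffices to check that the work was done."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return digest >> (256 - bits) == 0

stamp = mint("adam@example.com", bits=12)  # ~2^12 hashes of work on average
assert verify(stamp, bits=12)              # near-instant to check
```

Bitcoin's proof of work rests on the same asymmetry between minting and verifying, with the difficulty retargeted so that blocks arrive roughly every ten minutes.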
This is a recap of the top 10 posts on Hacker News on February 26, 2026. This podcast was generated by wondercraft.ai.

(00:30) Statement from Dario Amodei on our discussions with the Department of War
Original post: https://news.ycombinator.com/item?id=47173121&utm_source=wondercraft_ai

(01:58) Tell HN: YC companies scrape GitHub activity, send spam emails to users
Original post: https://news.ycombinator.com/item?id=47163885&utm_source=wondercraft_ai

(03:27) Layoffs at Block
Original post: https://news.ycombinator.com/item?id=47172119&utm_source=wondercraft_ai

(04:56) Nano Banana 2: Google's latest AI image generation model
Original post: https://news.ycombinator.com/item?id=47167858&utm_source=wondercraft_ai

(06:24) Tech companies shouldn't be bullied into doing surveillance
Original post: https://news.ycombinator.com/item?id=47160226&utm_source=wondercraft_ai

(07:53) RAM now represents 35 percent of bill of materials for HP PCs
Original post: https://news.ycombinator.com/item?id=47161160&utm_source=wondercraft_ai

(09:22) Will vibe coding end like the maker movement?
Original post: https://news.ycombinator.com/item?id=47167931&utm_source=wondercraft_ai

(10:50) AirSnitch: Demystifying and breaking client isolation in Wi-Fi networks [pdf]
Original post: https://news.ycombinator.com/item?id=47167763&utm_source=wondercraft_ai

(12:19) What Claude Code Chooses
Original post: https://news.ycombinator.com/item?id=47169757&utm_source=wondercraft_ai

(13:48) Show HN: Terminal Phone – E2EE Walkie Talkie from the Command Line
Original post: https://news.ycombinator.com/item?id=47164270&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI by wondercraft.ai. Create your own studio-quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
In this talk, Juan, Analytics Engineer and author of Fundamentals of Analytics Engineering, shares his professional journey from studying psychological research in Colombia to becoming one of the first analytics engineers in the Netherlands. We explore the evolution of the role, the shift toward engineering rigor in data modeling, and how the landscape of tools like dbt and Databricks is changing the way teams work.

You'll learn about:
- The fundamental differences between traditional BI engineering and modern analytics engineering.
- How to bridge the gap between business stakeholders and technical data infrastructure.
- The technical "glue" that connects Python and SQL for robust data pipelines.
- The importance of automated testing (generic vs. singular tests) to prevent "silent" data failures.
- Strategies for modeling messy, fragmented source data into a unified "business reality."
- The current state of the "Lakehouse" paradigm and how it impacts storage and compute costs.
- Expert advice on navigating the dbt ecosystem and its emerging competitors.

Links:
- DE Course: https://github.com/DataTalksClub/data-engineering-zoomcamp
- Luma: https://luma.com/0uf7mmup

TIMECODES:
0:00 Juan's psychological research and transition to data
4:36 Riding the wave: The early days of analytics engineering
7:56 Breaking down the gap between analysts and engineers
11:03 The art of turning business reality into clean data
16:25 Why data engineering is about safety, not just speed
20:53 Reimagining data modeling in the modern era
26:53 To split or not to split: Finding the right team roles
30:35 Python, SQL, and the technical toolkit for success
38:41 How to stop manually testing your data dashboards
46:34 Bringing software engineering rigor to data workflows
49:50 Must-read books and resources for mastering the craft
55:42 The future of dbt and the shifting tool landscape
1:00:29 Deciphering the lakehouse: Warehousing in the cloud
1:11:16 Pro-tips for starting your data engineering journey
1:14:40 The big debate: Databricks vs. Snowflake
1:18:28 Why every data professional needs a local community

This talk is designed for data analysts looking to level up their engineering skills, data engineers interested in the business-logic layer, and data leaders trying to structure their teams more effectively. It is particularly valuable for those preparing for the Data Engineering Zoomcamp or anyone looking to transition into an Analytics Engineering role.

Connect with Juan:
- LinkedIn - https://www.linkedin.com/in/jmperafan/
- Website - https://juanalytics.com/

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/
### Episode Overview

Do you have a Figma landing page whose design was finished long ago but never went live? What blocks you is usually not the design itself, but the "technical" work: environment setup, responsive adaptation, deployment, and domains. In this episode, Bear walks through the full process of publishing a static Figma design as a real website using **Claude Code**: zero programming experience, done in half a day, driven entirely by natural language. It applies to static sites such as landing pages, portfolios, and case studies.

---

### The Core Workflow

**Step 1: Plan everything with Plan Mode**
In Claude Code, press `Shift + Tab × 2` to enter Plan Mode and have the AI lay out a complete plan before executing anything. Review the framework, steps, and dependencies up front, and only start once you're satisfied.

**Step 2: Connect the Figma MCP and extract design tokens**
Give Claude the Figma design link and let it connect via MCP to automatically identify design tokens such as colors, fonts, and spacing, as well as the structure of each page section.

**Step 3: Set up a local environment and rebuild the design**
With **Next.js + Tailwind CSS** as the stack, Claude can turn roughly 90% of the design into a locally runnable website in about 20 minutes.

**Step 4: Make it responsive, but don't rely entirely on the AI**
During mobile adaptation, if the AI keeps looping on the same problem (say, how the hero image is cropped), don't burn tokens fighting it: **crop the image yourself in Figma and swap it in**. That's faster, and it's the most important lesson of this episode.

**Step 5: Screenshot + paste to polish the details**
When something doesn't match the design, screenshot it, `Ctrl+V` it into Claude, and describe the problem; it will fix it against the original design automatically. Annotating with arrows works even better, like collaborating with a developer sitting next to you.

**Step 6: Push to GitHub, deploy to Vercel, connect a domain**
Once everything is done, have Claude push the code to GitHub, hook it up to Vercel for hosting, and bind your own domain. It even generated a README and a blog draft along the way.

---

### Three Key Takeaways

1. **Control the scope**: you're publishing a landing page, not building a product; stay restrained
2. **Plan first, then iterate**: lead with Plan Mode, paired with small visual checks
3. **Know the limits**: AI gets stuck on subjective visual judgment; stepping in manually is faster there

---

### Tools and Resources Mentioned
-
Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of Node.js are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio.

Related links:
- Oxide Computer
- Oxide and Friends
- Illumos
- Platform as a Reflection of Values
- RFD 26
- bhyve
- CockroachDB
- Heterogeneous Computing with Raja Koduri

Transcript: You can help correct transcripts on GitHub.

Intro

[00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems.

[00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio.

[00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here.

[00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that, a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers.

[00:00:41] Jeremy: 'Cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how, how was your experience with that? What, what were the challenges there?

Samsung scale and migrating off the cloud

[00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company and, you know, a small company by the standards of, certainly by Samsung standards.

[00:01:25] Bryan: And so when, when Samsung bought the company, I mean, the reason by the way that Samsung bought Joyent is Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on, on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure.

[00:01:51] Bryan: It did not make sense. And there's not, was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, I mean in that, in that regard, like, the state of the market was really no different. And so they went looking for a company, uh, and bought, bought Joyent.

[00:02:11] Bryan: And it was when we were on the inside of Samsung that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like, Samsung scale really is, I mean, just the, the sheer, the number of devices, the number of customers, just this absolute size. They really wanted to take us out to, to levels of scale, certainly, that we had not seen.

[00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we were gonna go buy, we did go buy a bunch of hardware.

Problems with server hardware at scale

[00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small, we just, you know, I just remember hoping. And hope was of course a terrible strategy, and it was a terrible strategy here too.
Uh, and the problems that we saw at the large were, when you scale out, the problems that you see kind of once or twice you now see all the time, and they become absolutely debilitating.

[00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, in many ways, like, comically debilitating, uh, in terms of, of showing just how bad the state of the art is. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software.

[00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. I mean, you got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the, the problems that are in the hardware platform, the problems that are in the componentry beneath you, become the problems that are in the firmware.

IO latency due to hard drive firmware

[00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you like a, a couple of concrete examples just to give, give you an idea of kinda what you're looking at. One of the, our data centers had really pathological IO latency.

[00:04:23] Bryan: We had a very, uh, database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. So, an all-flash buy did not make economic sense when we did this in, in 2016. It'd be interesting to know, like, when was the, the kind of the last time that actual hard drives made sense?

[00:04:50] Bryan: 'Cause I feel this was close to it.
So we had a, a bunch of, of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system, it took us a long time to figure out, like, why. Because when, when you're seeing worse IO, I mean, naturally, you wanna understand, like, what's the workload doing?

[00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this is a very intensive database workload to support the, the object storage system that we had built called Manta. And the, the metadata tier was stored in, uh, we were using Postgres for that. And that was just getting absolutely slaughtered.

[00:05:34] Bryan: Um, and ultimately very IO-bound, with these kind of pathological IO latencies. Uh, and we were, you know, trying to, like, peel away the layers to figure out what was going on. And I finally had this thing. So it's like, okay, we are seeing at the, at the device layer, at the, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else.

[00:06:00] Bryan: And that does not make any sense. And the thought occurred to me, I'm like, well, do we have, like, a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. And, um, so maybe we had a different, maybe I had a firmware bug.

[00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I mean, I had no idea that Toshiba even made hard drives, let alone that they were, they were in our data center.

[00:06:38] Bryan: I'm like, what is this?
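The kind of tail-latency hunt Bryan describes, comparing per-data-center IO outliers rather than averages, can be sketched in Python. The 2,700 ms stall is the only figure taken from the episode; the data-center names, sample counts, and baseline latencies below are invented purely for illustration:

```python
# Toy sketch (invented data): pathological disk-latency outliers show up
# in the tail, not the mean, so compare high percentiles per data center.
import random
import statistics

random.seed(1)

def p99(samples):
    """99th-percentile latency (n=100 gives 99 cut points; [98] is p99)."""
    return statistics.quantiles(samples, n=100)[98]

# Simulated per-IO latencies in ms: healthy DCs hover around 5 ms, while
# "dc-3" has drives that occasionally stall for ~2700 ms, as in the story.
latencies = {}
for dc in ("dc-1", "dc-2", "dc-3"):
    samples = [random.gauss(5, 1) for _ in range(10_000)]
    if dc == "dc-3":
        samples += [random.gauss(2700, 50) for _ in range(300)]
    latencies[dc] = samples

for dc, samples in sorted(latencies.items()):
    print(f"{dc}: mean={statistics.mean(samples):7.1f} ms"
          f"  p99={p99(samples):7.1f} ms")
```

Even a few percent of stalled IOs drag the p99 out to the stall magnitude, which is why comparing tail percentiles across data centers surfaces this kind of drive-level pathology long before averages do.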
And as it turns out, and this is, you know, part of the, the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't, is that Dell would routinely just make substitutes. And they make substitutes that, you know, it's kind of like you're going to, like, I don't know, Instacart or whatever, and they're out of the thing that you want.

[00:07:03] Bryan: So, you know, someone makes a substitute, and, like, sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate an end-to-end integrated system. And in this case, like, Toshiba does make hard drives, or they did, uh, but they basically were, uh, not competitive, and they were not competitive in part for the reasons that we were discovering.

[00:07:29] Bryan: They had really serious firmware issues. So these were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. Long time, 2.7 seconds. Um, and that was a, it was a drive firmware issue, but it highlighted, like, a much deeper issue, which was the simple lack of control that we had over our own destiny.

[00:07:53] Bryan: Um, and it's an, it's, it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries.

[00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's, it's true for HPE, it's true for Supermicro, uh, it's true for your switch vendors.
It's, it's true for storage vendors, where the, the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center.

AWS / Google are not buying off-the-shelf hardware but you can't use it

[00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The, the product that you buy is the public cloud. Like, when you go in the public cloud, you don't worry about the stuff, because it's, it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground.

[00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had, kind of at Joyent, beginning to wonder, and then at Samsung, kind of wondering what was next, uh, is that what they built was not available for purchase in the data center.

[00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it.

[00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, the one we, we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency.

[00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure.
But, uh, that was very much the, the, so the, the genesis for Oxide was coming out of this very painful experience. And, I mean, a long answer to your question about, like, what was it like to be at Samsung scale?

[00:10:27] Bryan: Those are the kinds of things that we, I mean, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives. But it's only when you get to this larger scale that you begin to see some of these pathologies. And these pathologies then are really debilitating for those who are trying to develop a service on top of them.

[00:10:45] Bryan: So it was, it was very educational in, in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale.

[00:10:57] Jeremy: Yeah, because I, I think as software engineers, a lot of times we, we treat the hardware as a, as a given, where,

[00:11:08] Bryan: Yeah.

There's software in hard drives

[00:11:09] Jeremy: It sounds like in, in this case, I mean, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to, to deal with the, the consequences of that.

[00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. I mean, I think that they, it's not like they're making a deliberate decision to kind of ship garbage. It's just that they are making, I mean, I think it's exactly what you said about, like, not thinking about the hardware. It's like, what's a hard drive?

[00:11:47] Bryan: Like, what's it, I mean, it's a hard drive. It's got the same specs as this other hard drive, and, you know, it's a little bit cheaper, so why not?
It's like, well, like, there's some reasons why not, and one of the reasons why not is, like, uh, even a hard drive, whether it's rotating media or, or flash, like, that's not just hardware.

[00:12:05] Bryan: There's software in there. And the software's, like, not the same. I mean, there are components where, you know, if, if you're looking at, like, a resistor or a capacitor or something like this, yeah, if you've got two, two parts that are within the same tolerance, yeah, like, sure, maybe. Although even the EEs, I think, would be, would be, uh, objecting to that a little bit.

[00:12:19] Bryan: But the, the, the more complicated you get, and certainly once you get to the, the kind of the hardware that we think of, like a, a, a microprocessor, a, a network interface card, a, a, a hard drive, an NVMe drive.

[00:12:38] Bryan: Those things are super complicated, and there's a whole bunch of software inside of those things, the firmware. And that's the stuff that, that you can't, I mean, you say that software engineers don't think about that. It's like, no one can really think about that, because it's proprietary. It's kinda welded shut, and you've got this abstraction into it.

[00:12:55] Bryan: But the, the way that thing operates is very core to how the thing in aggregate will behave. And I think that the, the kind of, the, the fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that, that ultimately delivers cloud computing to a user.

[00:13:22] Bryan: And there's a lot of software at many, many, many, many different layers. And it's very important to think about, about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, like, well, this one's not working, so maybe we'll just replace the hardware. What, what was the thought process when you were working at that smaller scale, and, and how did these issues affect you?

UEFI / Baseboard Management Controller

[00:13:58] Bryan: Yeah, at the smaller scale, you, uh, you see fewer of them, right? What you might see is, like, that's weird, we kinda saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. Um, so you just, you just see them, uh, less frequently, and as a result, they are less debilitating.

[00:14:16] Bryan: Um, I, I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. Um, so it, it really is in many regards a function of scale. Uh, and then I think it was also, you know, it was a little bit dispiriting that kind of the substrate we were building on really had not improved.

[00:14:39] Bryan: Um, and if you look at, you know, if you buy a computer server, an x86 server, there is a very low layer of firmware, the BIOS, the basic input/output system, the UEFI BIOS. And this is, like, an abstraction layer that has, has existed since the eighties and hasn't really meaningfully improved. Um, the, the kind of the transition to UEFI happened with, I mean, ironically, with Itanium, um, you know, two decades ago.

[00:15:08] Bryan: But beyond that, like, this low layer, this lowest layer of platform enablement software, is really only impeding the operability of the system.
Um, you look at the baseboard management controller, which is the kind of the computer within the computer. There is a, uh, there is an element in the machine that needs to handle environmentals, that needs to, uh, operate the fans and so on.

[00:15:31] Bryan: Uh, and that traditionally has been the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed.

[00:15:50] Bryan: Um, ASPEED has a proprietary part where there is, infamously, a root password, effectively encoded in silicon. Uh, which is just, for, um, anyone who kind of goes deep into these things, like, oh my God, are you kidding me? Um, when we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC.

[00:16:16] Bryan: It's kinda like a little, little BMC humor. Um, but those things, it was just dispiriting that, that the, the state of the art was still basically personal computers running in the data center. Um, and that's part of what, what was the motivation for doing something new.

[00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing?

Security vulnerabilities and poor practices in the BMC

[00:16:51] Bryan: Oh man, I, the, you are going to have, like, some fraction of your listeners, maybe a big fraction, where, like, yeah, like, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose, like, heads already hit the desk, being like, what are the problems?

[00:17:06] Bryan: Like, what are the non-problems? Like, what, what works?
Actually, that's like a shorter answer. Um, I mean, there are so many problems, and a lot of it is just, like, I mean, there are problems just architecturally. The problems spread to the horizon, so you can kind of start wherever you want.

[00:17:24] Bryan: But, I mean, as, like, as a really concrete example. Okay, so the, the BMC, that, that computer within the computer, needs to be on its own network. So you now have, like, not one network, you got two networks. And that network, by the way, that's the network that you're gonna log into to, like, reset the machine when it's otherwise unresponsive.

[00:17:44] Bryan: So, going into the BMC, you're able to control the entire machine. Well, it's like, all right, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I, how do I patch that?

[00:18:02] Bryan: How do I, like, manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? I mean, it's like you've got this, like, second shadow infrastructure that you have to go manage.

[00:18:23] Bryan: Generally not open source. There's something called OpenBMC, um, which, um, people use to varying degrees, but you're generally stuck with the proprietary BMC. So you're generally stuck with, with iLO from HPE or iDRAC from Dell or, or, uh, the, uh, Supermicro BMC, and you are, uh, it is just excruciating pain.

[00:18:49] Bryan: Um, and this is assuming, by the way, that everything is behaving correctly. The, the problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly.
It's really dire, because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example.

[00:19:07] Bryan: A customer reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their, their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to, like, invent its own, a different kind of thermal control loop.

[00:19:28] Bryan: And it would index on the, on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's, like, that's an interesting idea that doesn't work, 'cause that's actually not the temperature.

[00:19:45] Bryan: So, like, that software would crank the fans whenever you had an inrush of current. And this customer had a workload that would spike the current, and when it would spike the current, the, the, the fans would kick up and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part.

[00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of, like, a hundred watts a server of, of energy that you shouldn't be spending. And, like, ultimately what that comes down to is this kind of broken software/hardware interface at the lowest layer, that has real, meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has, has very, very, very real consequence, and it's such a shadowy world.
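The failure mode Bryan describes is easy to reproduce in a toy model. This is not the vendor's actual firmware, just a sketch with invented numbers: a loop that boosts fan duty on every current spike and decays it slowly will pin the fans whenever spikes arrive faster than the decay, regardless of the actual temperature:

```python
# Toy model of the described bug: the BMC uses CPU inrush current as a
# proxy for temperature, cranking fans on every spike and letting them
# wind down slowly. Spike faster than the decay and the fans stay pinned
# even though the part never actually heats up. All numbers are made up.

def avg_fan_duty(spike_period_s, decay_per_s=2.0, boost=60.0, steps=600):
    """Average fan duty (%) over `steps` seconds of simulated time."""
    duty, total = 0.0, 0.0
    for t in range(steps):
        if t % spike_period_s == 0:          # workload spikes inrush current
            duty = min(100.0, duty + boost)  # control loop boosts the fans
        duty = max(0.0, duty - decay_per_s)  # fans slowly wind down
        total += duty
    return total / steps

# Spikes every 5 s outrun the 2 %/s decay: fans sit near max, blowing
# cold air. Spikes every 60 s let the fans wind down between events.
busy = avg_fan_duty(spike_period_s=5)
calm = avg_fan_duty(spike_period_s=60)
print(f"avg duty, spiky workload: {busy:.0f}%  vs  rare spikes: {calm:.0f}%")
```

At roughly a hundred watts of unnecessary fan power per server, a few thousand servers stuck in the "spiky" regime is exactly the hundreds of kilowatts of waste described above.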
Part of the reason that, that your listeners that have dealt with this, whose heads will hit the desk, is because it is really aggravating to deal with problems at this layer.

[00:21:01] Bryan: You, you feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor. Your vendor is telling you that, like, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that, and I, I have pledged that we're, we're not gonna say that at Oxide, because it's such an unaskable thing to say, like, you're the only customer seeing this.

[00:21:25] Bryan: It's like, it feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. Um, and what you begin to realize is that to a degree, these folks are speaking their own truth, because the, the folks that are running at real scale, at hyperscale, those folks aren't Dell, HP, Supermicro customers.

[00:21:46] Bryan: They've actually done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but you only have to run at modest scale before these things just become overwhelming in terms of the, the headwind that they present to people that wanna deploy infrastructure.

The problem is felt with just a few racks

[00:22:05] Jeremy: Yeah, so maybe to help people get some perspective, at, at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or,

[00:22:22] Bryan: Do you have a couple racks, or, or are you just wondering? Because, no, no, no, I would think, I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just again, just to get this thing working at all.
[00:22:39] Bryan: It is so, it, it's so hairy and so congealed, right? It's not designed. Um, it, it's accreted, and it's so obviously accreted that, I mean, nobody who is setting up a rack of servers is gonna think to themselves, like, yes, this is the right way to go do it, this all makes sense. Because it's, it's just not. It, I, it feels like the kit, I mean, kit car's almost too generous, because it implies that there's, like, a set of plans to work to in the end.

[00:23:08] Bryan: Uh, I mean, it, it, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are, are worse than just, like, this thing is a mess to get working.

[00:23:31] Bryan: It's like the, the, the fan issue, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so it is painful at more or less all levels of scale. There is no level at which the, the, the PC, which is really what this is, this is the, the personal computer architecture from the 1980s, and there is really no level of scale where that's the right unit.

Running elastic infrastructure is the hardware but also hypervisor, distributed database, API, etc.

[00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, like, we, we've kinda been talking a lot about that hardware layer. Like, hardware is, is just the start. Like, you actually gotta go put software on that and actually run that as elastic infrastructure.

[00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff required to actually run a real service of compute or networking or storage. And even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far the simplest of the three. When you look at network services and storage services, there's a whole bunch of distributed systems you need to build to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: The control plane is everything between your API request and that infrastructure actually being acted upon. So you say, hey, I want to provision a VM. Okay, great, there's a whole bunch of things we're gonna provision with that: the VM itself, some storage to go along with it, which comes out of a network storage service, and a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: There's a whole bunch of things we need to do for that. For all of these things, there are metadata components we need to keep track of, beyond the actual infrastructure we create. And then we need to actually act on the compute elements, the host OS, the switches, what have you, and go [00:25:56] Bryan: create these underlying things and then connect them. And of course, just getting that working is a big challenge.
But getting it working robustly is another matter: when you go to provision a VM, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: One thing we're very mindful of is these long tails: generally our VM provisioning happens within a certain time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: There's a whole lot of complexity you need to deal with to handle this effectively: this workflow that's gonna create these things and manage them. We use a pattern called sagas, which is actually a database pattern from the eighties. [00:26:51] Bryan: Caitie McCaffrey is a researcher who, I think, reintroduced the idea of sagas in the last decade or so. This is something we picked up and have done a lot of really interesting things with, to allow these workflows to be managed robustly, in a way where you can restart them and so on. [00:27:16] Bryan: And then you get this whole distributed system that can do all this, and that distributed system itself needs to be reliable and available. What happens if you pull a sled, or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with another sled being added? How do you actually grow this distributed system? And then how do you update it, how do you go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
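The saga idea Bryan describes, a sequence of steps where each step is paired with a compensating undo action so a failed workflow can be unwound in reverse, can be sketched roughly like this. This is an illustrative toy, not Oxide's actual implementation, and the step names are made up for the example:

```rust
// Toy saga: each step has an action and a compensating undo.
// If a step fails, previously completed steps are undone in reverse order.

type StepResult = Result<(), String>;

struct Step {
    name: &'static str,
    action: fn(&mut Vec<String>) -> StepResult,
    undo: fn(&mut Vec<String>),
}

// Run steps in order; on failure, unwind completed steps and report the error.
fn run_saga(steps: &[Step], log: &mut Vec<String>) -> StepResult {
    let mut done: Vec<&Step> = Vec::new();
    for step in steps {
        match (step.action)(log) {
            Ok(()) => done.push(step),
            Err(e) => {
                for s in done.iter().rev() {
                    (s.undo)(log);
                }
                return Err(format!("{} failed: {}", step.name, e));
            }
        }
    }
    Ok(())
}

fn main() {
    // Hypothetical VM-provisioning steps; names are illustrative only.
    let steps = [
        Step {
            name: "allocate-storage",
            action: |log| { log.push("storage allocated".into()); Ok(()) },
            undo: |log| log.push("storage released".into()),
        },
        Step {
            name: "attach-network",
            action: |_| Err("no virtual network available".into()),
            undo: |log| log.push("network detached".into()),
        },
    ];
    let mut log = Vec::new();
    let outcome = run_saga(&steps, &mut log);
    assert!(outcome.is_err());
    // Storage was allocated, then released when the network step failed.
    assert_eq!(log, vec!["storage allocated".to_string(),
                         "storage released".to_string()]);
    println!("saga unwound cleanly: {:?}", outcome);
}
```

A production saga framework adds what this toy omits: durable persistence of each step's state, so the workflow can be resumed after a crash rather than only unwound.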
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. This is what exists at AWS, at GCP, at Azure: when you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane doing all of this, and it has some of these same aspects and certainly some of these same challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, a little bit. vSphere, yes; VMware ESX, no. VMware ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, then you, as the human, might be the control plane. [00:28:52] Bryan: That's a much easier problem. But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloudish. I think that for folks who are truly born on the cloud, it still feels somewhat like going backwards in time when you look at these on-prem offerings. [00:29:29] Bryan: But it's got these aspects to it, for sure.
And some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect them to other, broader things to turn them into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or open source projects that are not necessarily aimed at the same level of scale. You look at, again, Proxmox, or an OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. There was a time people would ask, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing is bad news, not good news. And one of the things OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that is certainly similar in spirit. [00:30:53] Jeremy: And so, I think this is what you were alluding to earlier: the piece that allows you to allocate compute and storage and manage networking, that gives you the experience of, I can go to a web console or use an API and spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that, in hopefully a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And to do that in a modern way, it's not just a user-friendly way: you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't want any of those to be an afterthought relative to the others. [00:31:39] Bryan: You want the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. I think over time we've gotten slightly better tools, and maybe it's easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper there. [00:32:16] Bryan: We knew we wanted to build a control plane, but we were able to revisit some of the components, so maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database: it runs with a primary-secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we came to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: There was a period, one that now seems potentially brief in hindsight, of really high quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, on CRDB. That was a really important component that we had at Oxide and didn't have at Joyent. [00:33:19] Bryan: So we weren't rolling our own distributed database; we were just using Postgres and dealing with an enormous amount of pain in the surround. On top of that, a control plane is much more than a database, obviously. There's a whole bunch of software that you need to write [00:33:40] Bryan: to transform these API requests into reliable infrastructure, and there's a lot to that, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. At Joyent, [00:33:59] Bryan: in part because of the history of the company, and look, this is just not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node. I know right now that sounds like, well, you built it with Tinkertoys. [00:34:18] Bryan: Did you think you could build a skyscraper with Tinkertoys? It's like, well, we had greater aspirations for the Tinkertoys once upon a time, and it was better than Twisted from Python and EventMachine from Ruby, and we weren't gonna do it in Java. [00:34:32] Bryan: But let's just say that experiment did ultimately end in a predictable fashion, and we decided that maybe Node was not gonna be the best decision long term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. And then we
[00:34:53] Bryan: landed that in a foundation in about, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Indeed, the name of the company is a tip of the hat to the language we were pretty sure we were gonna be building a lot of stuff in, [00:35:16] Bryan: namely Rust. And Rust has been huge for us, a very important revolution in programming languages. There have been different people coming in at different times, and I came to Rust in what I think of as this big second expansion of Rust, in 2018, when a lot of technologists were sick of Node and also sick of Go, [00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance I get out of C, the robustness that I can get out of a C program but that is often difficult to achieve, and some of the velocity of development, although I hate that term, some of the speed of development, that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. And then of course the control plane, that distributed system on top, is all in Rust. The language was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. We were also able to use, and we'd done this at Joyent as well, illumos as a host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us, open source components that didn't exist even five years prior. [00:37:28] Bryan: That's part of why we felt 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I had higher hopes in 2010, I would say, when we set out on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. That's a laudable goal. [00:38:09] Bryan: That is the goal, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book, so there is no canonical source. You've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know the original intent of JavaScript. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties: a name that makes no sense. There is no Java in JavaScript. That is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is that it's much more difficult to write really rigorous software. And this is where I should differentiate JavaScript from TypeScript: this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript says: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's okay if it's a little harder to write, if it leads to more rigorous artifacts. But in JavaScript, just as a concrete example, there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you: by the way, I think you've misspelled this. But there is no type definition for this thing, and nothing knows that you've got one spelled correctly and one spelled incorrectly; the misspelled one is just undefined. So you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
And now that's either gonna be an exception or, depending on how it's handled, it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges we had with Node is that programmer errors and operational errors ("I'm out of disk space" is an operational error) get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just drive on in the event of all errors, [00:40:53] Bryan: which is actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. One of the things we did is bring a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: If one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could meaningfully process. We did a bunch of kind of wild stuff, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: These were things that we thought were really important, and the rest of the world just looked at this like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software, really designing it for production diagnosability, and the other designing software to run in the browser, for anyone to be able to kind of liven up a webpage, right?
[00:42:10] Bryan: That's kind of the origin of LiveScript and then JavaScript. And we were the only ones sitting at the intersection of those cultures. When you are the only ones sitting at that kind of intersection, you're fighting a community all the time. We realized there were so many things the community wanted to do where we felt: no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The Node.js split and why people left [00:42:36] Bryan: And then you realize: we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software, and it's time to move on. And in fact, we'd already kind of broken up with Node several years earlier, [00:42:55] Bryan: and it was a bit of an acrimonious breakup. There was a famous, slash infamous, fork of Node called io.js. This came about because the community thought Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: And of course we felt that we were being a careful steward, and that we were actively resisting those things that would cut against its fitness for a production system. But that's the way the community saw it, and they forked. And I think we knew before the fork: this is not working, and we need to get this thing out of our hands. Platform as a reflection of values: Node Summit talk [00:43:43] Bryan: We are the wrong hands for this; this needs to be in a foundation. So we had gone through that breakup, and maybe it was two years after that.
A friend of mine who was running Node Summit, who has unfortunately now passed away, Charles, a venture capitalist and a great guy, came to me in 2017. [00:44:07] Bryan: He said, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And I'm like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy I gave, because it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: The problem is that the values Node had for itself and the values we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And that was a big eye-opener. I would say, do watch this talk. [00:45:20] Bryan: Because I knew the audience was gonna be filled with people who had been a part of the fork, the io.js fork, in 2014, I think. And I knew there were some people there who had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight a fracture was inevitable, and in 2014 there was finally a fracture. And do people know what happened in 2014? If you listen to that talk, everyone says almost in unison: io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. It was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple of folks, Felix and a bunch of other early Node folks, who were there in 2010 and were leaving in 2014, and they were going, primarily, to Go. And they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values, and when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values you have for that software. That's way more important than other things people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've been in super small open source communities, and a bunch of others in between. There are strengths and weaknesses to both, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust: what would you say are the values of Go and Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, well, I understand why people moved from Node to Go. Go, to me, was kind of a lateral move. There were a bunch of things I didn't like: Go was still garbage collected, which I didn't like. Go is also very strange in terms of these autocratic decisions that are very bizarre. [00:49:17] Bryan: Generics is a famous one, right? Go, as a point of principle, didn't have generics, even though the innards of Go itself did have generics. It's just that you, a Go user, weren't allowed to have them.
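For listeners who haven't used generics, the feature in question lets you write one definition that the compiler checks and instantiates for many types. A minimal sketch, written in Rust since that's the language discussed later; the function itself is purely illustrative:

```rust
// One generic function, usable for any type that can be compared and copied.
// The compiler checks each use at compile time; no runtime type checks.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    let mut best = *items.first()?; // None for an empty slice
    for &x in &items[1..] {
        if x > best {
            best = x;
        }
    }
    Some(best)
}

fn main() {
    assert_eq!(largest(&[3, 7, 2]), Some(7));    // works for integers...
    assert_eq!(largest(&[1.5, 0.5]), Some(1.5)); // ...and floats, from one definition
    assert_eq!(largest::<i32>(&[]), None);       // empty input handled explicitly
    println!("generics: one function, many types");
}
```

Without generics, the alternatives are copy-pasting a version per type or erasing types at runtime, which is what the debate Bryan describes was about.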
[00:49:35] Bryan: And there was an old cartoon, years and years ago, about how, when a technologist tells you that something is technically impossible, that actually means "I don't feel like it." And there was a certain degree of "generics are technically impossible" in Go; it's like, hey, actually, they're not. [00:49:51] Bryan: I just think the arguments against generics were kind of disingenuous, and indeed they ended up adopting generics. And then there's some super weird stuff, like being very anti-assertion, which is like, what? How is someone against assertions? It doesn't even make any sense, but it's like, nope, [00:50:10] Bryan: there's a whole screed on it: nope, we're against assertions. And against versioning: Rob Pike has famously said you should always just run at the latest commit. And you're like, does that make sense? I mean, we actually built it. [00:50:26] Bryan: So there are a bunch of things like that where you're just like, okay, this is just exhausting. There are some things about Go that are great, and plenty of other things that I'm just not a fan of. In the end, Go cares a lot about compile time; it's super important for Go [00:50:44] Bryan: to have a very quick compile. I'm like, okay, but compile time is not... it's not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. What I really care about is a high-performing artifact. I wanted garbage collection out of my life.
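Rust's answer to wanting no collector is the ownership model: memory is freed deterministically at the point where its owner goes out of scope, with no runtime collector to tune. A minimal sketch; the `Drop` implementation here exists only to make the deallocation point visible:

```rust
// Ownership means deallocation happens at a known point, with no GC pauses:
// when the owning value goes out of scope, its Drop runs immediately.

struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // In real code the Vec frees itself; this just marks the moment.
        println!("freeing {} bytes", self.data.len());
    }
}

fn consume(buf: Buffer) -> usize {
    buf.data.len()
    // `buf` is dropped right here, deterministically: its ownership ended.
}

fn main() {
    let buf = Buffer { data: vec![0u8; 1024] };
    let n = consume(buf); // ownership moves into `consume`
    // `buf` can no longer be used here; the compiler enforces it.
    assert_eq!(n, 1024);
}
```

The point is that "when is this freed?" has a compile-time answer rather than depending on when a collector happens to run.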
Don't think garbage collection has good trade-offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. [00:51:21] Bryan: Garbage collection may be right for plenty of other people and the software they wanna develop, but for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. It's really not that hard to not leak memory in a C-based system, [00:51:44] Bryan: and you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, and now I need these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue goes to the -XX flags: use the other garbage collector, whatever one you're using, use a different one, a different approach. [00:52:23] Bryan: To me, you're in the worst of all worlds. The reason garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're dealing with long pauses in production, [00:52:38] Bryan: you're dealing with all these other issues where actually you need to think a lot about it.
And it's witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved, exactly? So the fact that Go had garbage collection, it's like, eh, no, I do not want that. And then you get all the other weird fatwas and everything else. [00:52:57] Bryan: I'm like, no thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention. There are just a bunch of other things that are thorny. And I remember thinking vividly in 2018: it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time, and have been doing since: really getting into Rust and really learning it, appreciating the difference in the model, for sure the ownership model people talk about. [00:53:54] Bryan: That's obviously very important. But it was the error handling that blew me away, and the idea of algebraic types. I'd never really had algebraic types. You really appreciate these things when you ask: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these sentinel values for errors.
[00:54:27] Bryan: Does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure. Traditionally in Unix, zero means success. And what if you wanna return a file descriptor? Then it's like, okay, zero through positive N will be a valid result, [00:54:44] Bryan: negative numbers will be an error. And was it negative one and I set errno, or is it any negative number? That's all convention, right? People do all those different things, and it's all convention: easy to get wrong, easy to have bugs, can't be statically checked, and so on. And then what Go says is, well, you're gonna have two return values, and then you're gonna have to constantly check all of these all the time, which is also kind of gross. JavaScript is like, hey, let's toss an exception: if we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you look at what Rust does, where it's like, no, no, no. We're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a pattern match on this thing to determine if it's a this or a that. And the Result type is a generic: it's gonna be either an Ok that contains the thing you wanna return, or an Err that contains your error, and it forces your code to deal with that.
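The convention-versus-types contrast Bryan is drawing can be sketched in a few lines of Rust. This is a hypothetical `read_config` helper, not code from the episode; it shows how the compiler refuses to let the caller touch the value until both the `Ok` and `Err` arms are handled:

```rust
use std::fs;

// Hypothetical illustration of the Result type described above.
// In C this would return, say, -1 and set errno, which the caller
// could silently ignore; here failure is a first-class value.
fn read_config(path: &str) -> Result<String, std::io::Error> {
    fs::read_to_string(path)
}

fn main() {
    // The pattern match is mandatory: there is no way to use the
    // contents without deciding what to do about the error case.
    match read_config("/etc/nonexistent.conf") {
        Ok(contents) => println!("read {} bytes", contents.len()),
        Err(e) => eprintln!("failed to read config: {}", e),
    }
}
```

Unlike Go's two-value convention, ignoring the error here is a compile-time decision, not a habit the programmer has to maintain.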
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the actual developer during development. And I love that shift; that shift to me is really important. That's what I was missing, and that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because whether it's garbage collection or error handling, at runtime, when you're trying to solve a problem, it's much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think, again, if it's infrastructure software, the question that you should have when you're writing software is: how long is this software gonna live? How many people are gonna use this software? And if you are writing an operating system, the answer is that this thing you're gonna write is gonna live for a long time. [00:57:18] Bryan: If we just look at plenty of aspects of the system that have been around for decades, it's gonna live for a long time, and many, many people are gonna use it. Why would we not expect people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like, hey, I kind of don't care about this. I don't know, I just wanna see if this whole thing works. I'm just stringing this together.
The software will be lucky if it survives until tonight, but then, who cares? Yeah. Yeah. [00:57:52] Bryan: Garbage collect, you know. If you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of it can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see, I mean, Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if it can write Rust, then because of the rigor, the memory management, or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, there are certain classes of errors that you don't have, that you actually don't know about in a C program or a Go program or a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp of, maybe we've already seen it, this kind of great bifurcation in the software that we write
ML engineering demand remains high with a 3.2 to 1 job-to-candidate ratio, but entry-level hiring is collapsing as AI automates routine programming and data tasks. Career longevity requires shifting from model training to production operations, deep domain expertise, and mastering AI-augmented workflows before standard implementation becomes a commodity. Links Notes and resources at ocdevel.com/mlg/mla-30 Try a walking desk - stay healthy & sharp while you learn & code Generate a podcast - use my voice to listen to any AI generated content you want Market Data and Displacement ML engineering demand rose 89% in early 2025. Median salary is $187,500, with senior roles reaching $550,000. There are 3.2 open jobs for every qualified candidate. AI-exposed roles for workers aged 22 to 25 declined 13 to 16%, while workers over 30 saw 6 to 12% growth. Professional service job openings dropped 20% year-over-year by January 2025. Microsoft cut 15,000 roles, targeting software engineers, and 30% of its code is now AI-generated. Salesforce reduced support headcount from 9,000 to 5,000 after AI handled 30 to 50% of its workload. Sector Comparisons Creative: Chinese illustrator jobs fell 70% in one year. AI increased output from 1 to 40 scenes per day, crashing commission rates by 90%. Trades: US construction lacks 1.7 million workers. Licensing takes 5 years, and the career fatality risk is 1 in 200. High suicide rates (56 per 100,000) and emerging robotics like the $5,900 Unitree R1 indicate a 10 to 15 year window before automation. Orchestration: Prompt engineering roles paying $375,000 became nearly obsolete in 24 months. Claude Code solves 72% of GitHub issues in under eight minutes. Technical Specialization Priorities Model Ops: Move from training to deployment using vLLM or TensorRT. Set up drift detection and monitoring via MLflow or Weights & Biases. Evaluation: Use DeepEval or RAGAS to test for hallucinations, PII leaks, and adversarial robustness. 
Agentic Workflows: Build multi-step systems with LangGraph or CrewAI. Include human-in-the-loop checkpoints and observability. Optimization: Focus on quantization and distillation for on-device, air-gapped deployment. Domain Expertise: 57.7% of ML postings prefer specialists in healthcare, finance, or climate over generalists. Industry Perspectives Accelerationists (Amodei, Altman): Predict major disruption within 1 to 5 years. Skeptics (LeCun, Marcus): Argue LLMs lack causal reasoning, extending the adoption timeline to 10 to 15 years. Pragmatists (Andrew Ng): Argue that as code gets cheap, the bottleneck shifts from implementation to specification.
AI is already displacing workers in targeted ways - entry-level knowledge workers are being quietly erased from hiring pipelines, freelancers are getting crushed, and the career ladder is being sawed off at the bottom rungs. Yet ML engineer demand has surged 89% with a 3.2:1 talent deficit and $187K median salary. Covers the real displacement data, lessons from the artist bloodbath, the trades escape hatch, the orchestrator treadmill, expert disagreements on timelines, and concrete short- and long-term career moves for ML engineers. Links Notes and resources at ocdevel.com/mlg/mla-4 Try a walking desk - stay healthy & sharp while you learn & code Generate a podcast - use my voice to listen to any AI generated content you want Market Metrics and Displacement Dynamics ML Market: H1 2025 demand rose 89% with a 3.2 to 1 talent deficit. Median salary is $187,500, while Generative AI specialists earn a 40 to 60 percent premium. The "Quiet" Decline: Macro data shows only 4.5% of total layoffs are AI-attributed, but entry-level hiring is collapsing. Stanford/ADP data shows a 13 to 16 percent employment drop for workers aged 22 to 25 in AI-exposed roles since late 2022. UK graduate job postings fell 67%. Corporate Attrition: Salesforce cut 4,000 roles after AI absorbed 30 to 50 percent of workloads. Microsoft cut 15,000 roles as AI began generating 30% of its code. Amazon cut 30,000 jobs while spending $100 billion on AI infrastructure. Sector Analysis: Creative and Trades Illustrators: Jobs in China's gaming sector fell 70% in one year. Clients accept "good enough" work (80% quality) at 5% of the cost. Western freelance graphic design and writing jobs fell 18.5% and 30% respectively within eight months of ChatGPT's launch. Manual Labor: The U.S. construction industry lacks 1.7 million workers annually, but apprenticeships take five years. Humanoid robotics are advancing, with Unitree's R1 priced at $5,900 and Figure AI robots completing 1,250 runtime hours at BMW. 
Full automation is 10 to 15 years away, but partial displacement via smaller crews is closer. The Orchestration Treadmill Obsolescence Speed: Prompt engineering roles went from $375,000 salaries to obsolescence in 24 months. AI coding agents like Claude Code now resolve 72% of medium-complexity GitHub issues autonomously. Fragile Expertise: Replacing junior workers with AI prevents the development of future senior talent. New engineers risk "fragile expertise," directed by tools they cannot debug during novel failure modes. Economic and Expert Outlook Macro Risks: Daron Acemoglu warns of "so-so automation" that cuts costs without raising productivity, predicting only 0.66% growth over ten years. "Ghost GDP" describes AI-inflated accounts that fail to circulate because machines do not consume. Expert Camps: Accelerationists (Anthropic, OpenAI) predict human-level AI by 2027. Skeptics (LeCun, Marcus) argue LLMs are a dead end lacking world models. Pragmatists (Andrew Ng) suggest shifting from implementation to specification as the cost of code nears zero. Tactical Adaptation for ML Engineers Immediate Skills: Master production ML systems, MLOps, LLM evaluation, and safety engineering. Ability to manage deployment risks and hallucination detection is the primary hiring differentiator. Long-term Moats: Focus on "Small AI" (on-device, private), mechanistic interpretability, and deep domain knowledge in healthcare, logistics, or climate science. The Playbook: Optimize for the current three to five year window. Move from being a model builder to a product-focused engineer who understands business tradeoffs and regulatory compliance.
Jason Martin, Director of Adversarial Research at HiddenLayer, returns to discuss the security implications of OpenClaw, a viral open-source AI personal assistant that was entirely vibe-coded and has exploded to 180,000 GitHub stars. Subscribe to the Gradient Flow Newsletter
In this episode of Startup Hustle, Matt Watson interviews Krishna Oza, founder and COO of GitHired, discussing the challenges of hiring software engineers, particularly for startups. Krishna shares the personal experiences that led to the creation of GitHired, an AI-driven platform designed to help startups find the right technical talent based on proof of work. The conversation delves into the unique needs of early-stage developers, the importance of product thinking, and how GitHired identifies and surfaces 10x engineers. Krishna also discusses the business model of GitHired and the struggles startup founders face in finding suitable engineering talent.

TAKEAWAYS
Krishna's personal experience with hiring challenges inspired GitHired.
Startups need engineers who can match their fast-paced environment.
Early-stage developers are builders who understand product development.
Product thinking is crucial in today's AI-driven landscape.
10x engineers possess product vision and minimal organizational friction.
GitHired surfaces hidden engineering talent through GitHub analysis.
The platform creates one-page portfolios for applicants based on their work.
Complexity of projects is a key factor in evaluating candidates.
The business model includes a flat fee for successful hires.
Startup founders often struggle to find engineers who can build for users.

⏱️ Episode Breakdown
00:00 The Genesis of GitHired
03:01 The Ideal Early Stage Developer
07:01 The Importance of Product Thinking
10:10 Identifying 10x Engineers
12:52 The Role of Proof of Work
20:09 Business Model and Market Fit
23:40 Startup Founder Struggles

Links & Resources
Connect with Krishna Oza on LinkedIn
What Smart CTOs Are Doing Differently With Offshore Teams in 2025
Subscribe to the Global Talent Sprint
Full Scale – Build your dev team quickly and affordably

If you're trying to get your team out of the basement and into real product ownership, this episode is your playbook. Stop being a ticket factory.
Build teams that think, create, and lead. Follow the show, rate it, and send this to someone who's still trying to do “real Scrum.” They need it more than you do.
The PodRocket panel is back for their February roundup! Paige, Paul, Jack and Noel dig into the biggest stories reshaping the web development landscape right now. The panel kicks off with a deep dive into OpenClaw, its transition to a foundation, and Peter Steinberger joining OpenAI. Is a foundation the right long-term home for fast-moving AI projects? And what does the continuing flow of talent into big AI labs mean for the open source ecosystem? From there, the conversation shifts to the browser's changing role in the web, how the lines between native and web experiences continue to blur, and what that means for developers building for the future. The panel also tackles growing pressures on open source sustainability and the widening gap between developers who are deeply integrating AI agents into their workflows and everyone else who hasn't even heard of these tools yet.

Resources
TechCrunch: OpenClaw creator Peter Steinberger joins OpenAI: https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai
Interop 2026 report and dashboard: https://web.dev/blog/interop-2026
Google Chrome announcement on Gemini auto-browsing: https://blog.google/products-and-platforms/products/chrome/gemini-3-auto-browse/
What to expect for open source in 2026, GitHub blog: https://github.blog/open-source/maintainers/what-to-expect-for-open-source-in-2026/?ref=thecodebrew.net

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey! https://t.co/oKVAEXipxu Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com, or tweet at us at PodRocketPod. Check out our newsletter! https://blog.logrocket.com/the-replay-newsletter/

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form, and we'll send you free PodRocket stickers!

What does LogRocket do?
LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. Chapters 00:00 Intro and Panel Welcome 01:00 What Is OpenClaw 03:00 Moving to a Foundation and OpenAI Concerns 08:00 AI Security Risks and Malware Issues 13:00 AI Haves vs Have Nots 18:00 Evaluating Open Source AI Stability 26:00 Browser Interop 2026 and Compatibility Gaps 31:00 Designing for AI Agents First 37:00 AI Search vs Google 42:00 Gemini in Chrome and Browser Lock In 49:00 Hot Takes 55:00 AI Burnout and Developer Mental HealthSpecial Guest: Jack Herrington.
Welcome to a new episode of Atareao con Linux. I'm Lorenzo, and today we're going to dig into an aspect that makes migrating from Docker to Podman not just advisable but necessary for those of us who want stability: native update management and the safety net of rollbacks.

In the container ecosystem, updating is vital for security and performance, but it always carries the risk of breaking the service. Many of us have relied on Watchtower, but today we'll discover why Podman plays in a different league. Because it integrates directly with systemd, Podman lets us automate the whole process with no external dependencies.

What will you learn in this episode?
Podman auto-update: how to configure your containers to stay up to date using registry and local labels.
Quadlets and systemd: the professional way to manage infrastructure as code on your own Linux machine.
Smart timers: how to stagger updates so they don't all run at once, and how to verify those schedules with native tools.
Reactive rollbacks: Podman's ability to detect a failure in a new version and automatically revert to the previous image, guaranteeing service availability.
Status notifications: how to hook up Telegram or Matrix alerts so you're always informed of what's happening on your server.

We'll also talk about the importance of health checks and how they act as the perfect trigger for the system to decide whether an update succeeded or should be rolled back.
It's the definitive answer to those compatibility problems that sometimes appear when a new image changes its requirements without warning. If you're passionate about self-hosting, systems administration, or simply want to optimize your container workflow, this episode will give you the technical keys.

Chapters:
00:00:00 Introduction to episode 774: Migrating to Podman
00:00:32 The problem with Watchtower in Docker
00:01:42 Podman: automatic updates out of the box
00:02:10 The magic of the systemd integration
00:03:13 The podman auto-update command and labels
00:03:57 Registry vs. local: update options
00:04:22 How to label containers, and Quadlets
00:05:30 Configuring timers with systemd
00:06:33 Advanced scheduling options and randomization
00:07:40 Verifying timers with systemd-analyze
00:08:29 Advantages over external services
00:09:04 Reactive rollbacks: Podman's big advantage
00:09:44 Technical requirements for automatic rollback
00:10:33 Health checks: guaranteeing service health
00:11:44 Solving compatibility problems in images
00:12:51 Automatic notifications in Telegram and Matrix
00:14:11 Conclusions and superiority over Watchtower
00:14:40 Resources in the notes and detailed labels
00:15:34 Next steps in the migration: Traefik and logs
00:16:32 Sign-off and podcast networks

Remember, you'll find all the technical details and the labels mentioned in the episode notes at atareao.es. Enjoy the episode!

More information and links in the episode notes
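As a hedged sketch of the setup this episode describes (the image name, port, and health-check command here are hypothetical, and exact key support varies by Podman version), a Quadlet `.container` unit that opts into registry auto-updates and defines the health check Podman uses to decide whether to roll back might look like:

```ini
# ~/.config/containers/systemd/web.container -- hypothetical example
[Container]
Image=docker.io/library/nginx:latest
# Opt this container into `podman auto-update` against the registry
AutoUpdate=registry
# Health check: if the restarted container fails it after an update,
# podman auto-update can roll back to the previous image
HealthCmd=curl -fsS http://localhost:80/ || exit 1
HealthInterval=30s
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

With something like this in place, the `podman-auto-update.service` and its timer pull newer images for containers labeled `AutoUpdate=registry`, restart them under systemd, and revert if the health check fails.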
I guess let's talk about the BAFTAs... again. Jump in with Janaya Future Khan. Project MVT on Github: https://github.com/mvt-project/mvt SUBSCRIBE + FOLLOW IG: www.instagram.com/darkwokejfk Youtube: www.youtube.com/@darkwoke TikTok: https://www.tiktok.com/@janayafk SUPPORT THE SHOW Patreon - https://patreon.com/@darkwoke Tip w/ a One Time Donation - https://buymeacoffee.com/janayafk Have a query? Comment? Reach out to us at: info@darkwoke.com and we may read it aloud on the show!
In this episode of the Ardan Labs Podcast, Ale Kennedy talks with Jens Neuse, CEO and co-founder of WunderGraph, about his unconventional path into technology and entrepreneurship. After a life-altering accident ended his carpentry career, Jens taught himself to code during recovery and eventually built WunderGraph to solve modern API challenges.

Jens shares the evolution of WunderGraph from an early-stage startup to a successful open-source platform, including pivotal moments like securing eBay as a customer. The conversation highlights the importance of resilience, community-driven development, and balancing startup life with family, offering insight into what it takes to build meaningful technology through adversity and persistence.

00:00 Introduction and Current Life
07:19 Dropping Out and Carpentry Career
10:52 Life-Altering Accident and Recovery
18:01 Learning to Walk and Finding Direction
27:46 Discovering Coding and Technology
31:17 Starting the Startup Journey
33:07 Discovering the Power of APIs
40:50 Building a Team and Leadership Growth
48:17 Founding WunderGraph
59:07 Pivoting to Open Source
01:05:32 eBay Breakthrough and Validation
01:10:08 Balancing Family and Startup Life

Connect with Jens: LinkedIn: https://www.linkedin.com/in/jens-neuse

Mentioned in this Episode: WunderGraph: https://wundergraph.com

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
SolarWinds patches four critical remote code execution vulnerabilities. A ransomware attack on Conduent puts the data of over 25 million Americans at risk. RoguePilot enables GitHub repository takeovers. ZeroDayRAT targets Android and iOS devices. North Korea's Lazarus group deploys Medusa ransomware against organizations in the U.S. and the Middle East. Attackers' breakout times drop to under half an hour. CISA maintains its mission despite staffing challenges. Russian satellites draw fresh scrutiny. Two South Korean teenagers are charged with breaching Seoul's public bike service. Krishna Sai, CTO at SolarWinds, discusses why leaders should focus less on speculating about an AI bubble, and more on how to quantify AI's tangible contributions. The Pope pushes prayerful priests past predictable programs. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Today we are joined by Krishna Sai, CTO at SolarWinds, discussing why leaders should focus less on speculating about an AI bubble, and more on how to quantify AI's tangible contributions.

Selected Reading
Critical SolarWinds Serv-U flaws offer root access to servers (Bleeping Computer)
Massive Conduent Data Breach Exfiltrates 8 TB Affects Over 25 Million Americans (GB Hackers)
GitHub Issues Abused in Copilot Attack Leading to Repository Takeover (SecurityWeek)
New ZeroDayRAT Malware Claims Full Monitoring of Android and iOS Devices (Hackread)
North Korean state hackers seen using Medusa ransomware in attacks on US, Middle East (The Record)
CrowdStrike says attackers are moving through networks in under 30 minutes (CyberScoop)
Shutdown at D.H.S.
Extends to Cyber Agency, Adding to Setbacks (The New York Times) From Cold War interceptors to Ukraine: how Russia came to park spy satellites next to the West's most sensitive tech in orbit (Meduza) Korean cops charge two teens over Seoul bike hire breach (The Register) Pope tells priests to use their brains, not AI, to write homilies (EWTN News) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Sally and Aji flick through thoughtbot's guide to best practices in a bid to brush up on their coding habits. Our hosts discuss key ideas from the guides that stand out to them the most, why they're considered to be good practice, as well as reviewing the cons of complex writing and the benefits of simple coding. — Be sure to check out Sally's new repo Michel if you're looking to create an appointment database, and check out the thoughtbot guides for more general coding advice. If you've got some spare time and want to hear Aji's talk on breaking the enigma code you can watch that here. Your hosts for this episode have been thoughtbot's own Sally Hall and Aji Slater. If you would like to support the show, head over to our GitHub page, or check out our website. Got a question or comment about the show? Why not write to our hosts: hosts@bikeshed.fm This has been a thoughtbot podcast. Stay up to date by following us on social media - YouTube - LinkedIn - Mastodon - BlueSky © 2026 thoughtbot, inc.
This is a free preview of a paid episode. To hear more, visit www.latent.space

First speakers for AIE Europe and AIEi Miami have been announced. If you're in Asia/Aus, come by Singapore and Melbourne. AI Engineering is going global!

One year ago today, Anthropic launched Claude Code, to not much fanfare. The word of mouth was incredibly strong, however, and so we were glad to be one of the first podcasts to invite Boris and Cat on in early May. As we discussed on the pod, all CC usage was API-based and therefore ridiculously expensive. This was then fixed by the team including Claude Code in the Claude Pro plan in early June, and the virality caused us to make a rare trend call in late June. Now, 6 months on, Doug has just calculated that around 4% of GitHub is written by Claude Code. We talk about how Doug uses Claude Code to do SemiAnalysis work.

Memory Mania
In the second part of this episode, we also check in on Memory Mania, which is going to affect you (yes, you) at home if it hasn't already.

Full Episode on YouTube

Timestamps
00:00 AI as Junior Analyst
00:59 Meet Swyx and Doug
03:30 From Value Mule to Semis
06:28 Moore's Law Ends Thesis
12:02 Claude Code Awakening
32:02 Agent Swarms Reality Check
32:53 Kimi Swarm Benchmarks
37:31 Bots vs Zapier Automation
39:44 Claude Code Workflow Setup
57:54 AGI Metrics and GDP
01:04:48 Railroad CapEx Analogy
01:06:00 Funding Bubbles and Demand
01:08:11 Agents Replace Work Tools
01:13:56 Codex vs Claude Race
01:21:15 Microsoft and TPU Strategy
01:34:13 TPU Window vs Nvidia
01:36:30 HBM Supply Chain Squeeze
01:39:41 Memory Shock and CXL
01:45:20 Context Rationing Future
01:54:37 Writing and Trail Lessons

Transcript
[00:00:00] AI as Junior Analyst
[00:00:00] Doug: This crap makes mistakes all the time. All the time. It is still just like a, like I think of it once again as like a junior analyst, right?
The analyst goes and gathers all this really pain-in-the-ass information, and you bring it all together to make a good decision at the top. Historically, what happens is that the junior analyst, who I once was, went and gathered all that information, and after doing this enough times there's a meta-level thinking happening: okay, here's what I really understand, this type of analysis I'm an expert in, I'm actually very good at it, I consistently have a hit rate.

[00:00:28] Now I'm the expert, right? I don't think that meta-level learning is there yet. We'll see if LLMs do it, right? Everyone who's spending one quadrillion dollars in the world thinks it will. It better happen, because if you're spending a trillion dollars and there's no meta-level learning...

[00:00:44] But for me, in our firm, that massively amplifies everyone who is an expert, 'cause you still have to do something you can't just, like, slop up. It's very obvious to me what's slop.

[00:00:59] Meet Swyx and Doug
Cafes, restaurants and car washes all use points and rewards to drive behaviors. Can we do the same with our Learning Management Systems? In this week's episode of The Mindtools L&D Podcast, Incentli's Jeff Campbell speaks to Ross G and Ross D about: how digital currencies give LMS administrators levers they can pull to drive behavior the role of extrinsic and intrinsic motivation on learning the impact of branded swag on learner advocacy In 'What I Learned This Week', Ross D discussed GitHub commits (super fun to bet on!) For more from Incentli, visit incentli.com. Incentli are a Mindtools Kineo partner, so if you would like to discuss integrating points and rewards with our Totara LMS please do get in touch by contacting custom@mindtools.com. For more from Mindtools Kineo, visit mindtools.com or kineo.com. There, you'll also find details of our Learning Management Systems, Content Hub for leaders and managers, and custom learning design service. Like the show? You'll LOVE our newsletter! Subscribe to The L&D Dispatch at lddispatch.com Connect with our speakers If you'd like to share your thoughts on this episode, connect with us on LinkedIn: Ross Garner Ross Dickie Jeff Campbell
Hi family, let's talk about the BAFTAs and Charlemagne's 'response' ... Jump in with Janaya Future Khan. Project MVT on Github: https://github.com/mvt-project/mvt SUBSCRIBE + FOLLOW IG: www.instagram.com/darkwokejfk Youtube: www.youtube.com/@darkwoke TikTok: https://www.tiktok.com/@janayafk SUPPORT THE SHOW Patreon - https://patreon.com/@darkwoke Tip w/ a One Time Donation - https://buymeacoffee.com/janayafk Have a query? Comment? Reach out to us at: info@darkwoke.com and we may read it aloud on the show!
March 3rd, Computer History Museum CODING AGENTS CONFERENCE, come join us while there are still tickets left. https://luma.com/codingagents

Chris Fregly is currently focused on building and scaling high-performance AI systems, writing and teaching about AI infrastructure, helping organizations adopt generative AI and performance engineering principles on AWS, and fostering large developer communities around these topics.

Performance Optimization and Software/Hardware Co-design across PyTorch, CUDA, and NVIDIA GPUs // MLOps Podcast #363 with Chris Fregly, Founder, AI Performance Engineer, and Investor

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
In today's era of massive generative models, it's important to understand the full scope of AI systems' performance engineering. This talk discusses the new O'Reilly book, AI Systems Performance Engineering, and the accompanying GitHub repo (https://github.com/cfregly/ai-performance-engineering). It provides engineers, researchers, and developers with a set of actionable optimization strategies. You'll learn techniques to co-design and co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems for both training and inference.

// Bio
Chris Fregly is an AI performance engineer and startup founder with experience at AWS, Databricks, and Netflix. He's the author of three O'Reilly books, including Data Science on AWS (2021), Generative AI on AWS (2023), and AI Systems Performance Engineering (2025).
He also runs the global AI Performance Engineering meetup and speaks at many AI-related conferences, including NVIDIA GTC, ODSC, Big Data London, and more.

// Related Links
AI Systems Performance Engineering: Optimizing Model Training and Inference Workloads with GPUs, CUDA, and PyTorch, 1st Edition, by Chris Fregly: https://www.amazon.com/Systems-Performance-Engineering-Optimizing-Algorithms/dp/B0F47689K8/
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Chris on LinkedIn: /cfregly

Timestamps:
[00:00] SageMaker HyperPod Resilience
[00:27] Book Creation and Software Engineering
[04:57] Software Engineers and Maintenance
[11:49] AI Systems Performance Engineering
[22:03] Cognitive Biases and Optimization / "Mechanical Sympathy"
[29:36] GPU Rack-Scale Architecture
[33:58] Data Center Reliability Issues
[43:52] AI Compute Platforms
[49:05] Hardware vs Ecosystem Choice
[1:00:05] Claude vs Codex vs Gemini
[1:14:53] Kernel Budget Allocation
[1:18:49] Steerable Reasoning Challenges
[1:24:18] Data Chain Value Awareness
What if your AI assistant could negotiate your next car purchase, trade prediction markets, manage your inbox, and research crypto… while you sleep? This week we break down the insane rise of ClaudeBot → Moltbot → OpenClaw, the open-source AI agent that rocketed past 200,000 GitHub stars and sparked a massive wave of autonomous agents almost overnight. But of course, crypto had to crypto. Along the way, the project triggered a triple rebrand, a $16M scam token spun up by opportunists, and a security mess that unfolded in real time as attackers hunted for poorly secured instances and exposed keys. We also get into the bigger picture: as AI agents start doing real work for people (and eventually paying bills, trading, and moving value), crypto becomes the natural payment rail. Permissionless. Programmable. Always on. Which is exciting… and also a whole new playground for scammers. We cover what happened, why it matters, and what you should do if you're experimenting with agents: lock it down, separate machines/accounts, protect your keys, and don't chase random tokens.

Show notes and links: http://badco.in/804
Leave a comment with the best OpenClaw tutorial you've found — we'll dig in.
Support the show: https://badcryptopodcast.com
See omnystudio.com/listener for privacy information.
Topics covered in this episode:
- Better Python tests with inline-snapshot
- jolt: battery intelligence for your laptop
- Markdown code formatting with ruff
- act: run your GitHub Actions locally
- Extras
- Joke

Watch on YouTube

About the show
Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends-of-the-show list; we'll never share it.

Brian #1: Better Python tests with inline-snapshot
Alex Hall, on the Pydantic blog. Great for testing complex data structures. Allows you to write a test like this:

```python
from inline_snapshot import snapshot

def test_user_creation():
    user = create_user(id=123, name="test_user")
    assert user.dict() == snapshot({})
```

Then run `pytest --inline-snapshot=fix`, and the library updates the test source code to look like this:

```python
def test_user_creation():
    user = create_user(id=123, name="test_user")
    assert user.dict() == snapshot({
        "id": 123,
        "name": "test_user",
        "status": "active"
    })
```

Now, when you run the tests without "fix", the collected data is used for comparison. Awesome to be able to visually inspect the test data right there in the test code.
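The capture-then-compare behavior described above can be illustrated with a stdlib-only stand-in. The `Snapshot` class and `create_user` factory below are hypothetical sketches of the idea, not inline-snapshot's actual implementation:

```python
class Snapshot:
    """Minimal stand-in for inline-snapshot's snapshot() behavior:
    an empty snapshot records the first value it is compared against;
    a filled snapshot compares strictly."""

    def __init__(self, expected=None):
        self.expected = expected  # None means "not yet captured"

    def __eq__(self, actual):
        if self.expected is None:       # the --inline-snapshot=fix pass:
            self.expected = actual      # capture the observed value
            return True
        return self.expected == actual  # normal pass: strict comparison

def create_user(id, name):
    # Hypothetical app code under test.
    return {"id": id, "name": name, "status": "active"}

# First run ("fix" mode): the empty snapshot captures the data.
snap = Snapshot()
assert create_user(123, "test_user") == snap
# Later runs: the captured value is what gets compared.
assert snap.expected == {"id": 123, "name": "test_user", "status": "active"}
```

The real library goes one step further and rewrites the test's source code so the captured value is visible inline.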
Projects mentioned: inline-snapshot, pytest-examples, syrupy, dirty-equals, executing

Michael #2: jolt
Battery intelligence for your laptop. Support for both macOS and Linux.
- Battery Status: charge percentage, time remaining, health, and cycle count
- Power Monitoring: system power draw with CPU/GPU breakdown
- Process Tracking: processes sorted by energy impact with color-coded severity
- Historical Graphs: track battery and power trends over time
- Themes: 10+ built-in themes with dark/light auto-detection
- Background Daemon: collect historical data even when the TUI isn't running
- Process Management: kill energy-hungry processes directly

Brian #3: Markdown code formatting with ruff
Suggested by Matthias Schoettle. ruff can now format code within Markdown files. It will format valid Python code in code blocks marked with python, py, python3, or py3, and also recognizes pyi as Python type-stub files. Formatting can be turned off for specific blocks with special comment directives. Requires preview mode:

```toml
[tool.ruff.lint]
preview = true
```

Michael #4: act - run your GitHub Actions locally
Run your GitHub Actions locally! Why would you want to do this? Two reasons:
1. Fast feedback: rather than having to commit/push every time you want to test out changes to your .github/workflows/ files (or to any embedded GitHub Actions), you can use act to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides.
2. Local task runner: I love make. However, I also hate repeating myself. With act, you can use the GitHub Actions defined in your .github/workflows/ to replace your Makefile!

When you run act, it reads in your GitHub Actions from .github/workflows/ and determines the set of actions that need to be run. It uses the Docker API to either pull or build the necessary images, as defined in your workflow files, and finally determines the execution path based on the dependencies that were defined.
Once it has the execution path, it then uses the Docker API to run containers for each action based on the images prepared earlier. The environment variables and filesystem are all configured to match what GitHub provides.

Extras
Michael:
- Winter is coming: Frozendict accepted
- Django ORM stand-alone
- Command Book app announcement post

Joke: Plug 'n Paste
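The "determines the execution path" step in the act description above boils down to ordering jobs by their `needs:` dependencies. A minimal stdlib sketch of that ordering, where the workflow dict is a made-up example and this is not act's actual (Go) implementation:

```python
from graphlib import TopologicalSorter

# Hypothetical parsed workflow: job name -> the jobs it `needs:`.
workflow_jobs = {
    "lint": [],
    "test": ["lint"],
    "build": ["lint"],
    "deploy": ["test", "build"],
}

def execution_order(jobs):
    """Return one valid run order that respects `needs:` edges,
    which a runner must compute before launching a container per job."""
    return list(TopologicalSorter(jobs).static_order())

order = execution_order(workflow_jobs)
# "lint" has no dependencies so it runs first; "deploy" waits for
# both "test" and "build" so it runs last.
```

`graphlib.TopologicalSorter` takes a mapping from node to its predecessors, which matches the `needs:` shape directly.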
OpenClaw is a self-hosted AI agent daemon that executes autonomous tasks through messaging apps like WhatsApp and Telegram, using persistent memory. It integrates with Claude Code to enable software development and administrative automation directly from mobile devices.

Links
- Notes and resources at ocdevel.com/mlg/mla-29
- Try a walking desk: stay healthy & sharp while you learn & code
- Generate a podcast: use my voice to listen to any AI-generated content you want

OpenClaw is a self-hosted AI agent daemon (Node.js, port 18789) that executes autonomous tasks via messaging apps like WhatsApp or Telegram. Developed by Peter Steinberger in November 2025, the project reached 196,000 GitHub stars in three months.

Architecture and Persistent Memory
- Operational Loop: the gateway receives a message, loads SOUL.md (personality), USER.md (user context), and MEMORY.md (persistent history), calls the LLM for tool execution, streams the response, and logs the data.
- Memory System: compounds context over months. Users should prompt the agent to remember specific preferences so it updates MEMORY.md.
- Heartbeats: proactive cron-style triggers for automated actions, such as 6:30 AM briefings or inbox triage.
- Skills: 5,705+ community plugins via ClawHub. The agent can author its own skills by reading API documentation and writing TypeScript scripts.

Claude Code Integration
- Mobile-to-deploy workflow: the claude-code-skill bridge gives OpenClaw access to Bash, Read, Edit, and Git tools via Telegram.
- Agent teams: claude-team manages multiple workers in isolated git worktrees to perform parallel refactors or issue resolution.
- Interoperability: use mcporter to share MCP servers between Claude Code and OpenClaw.

Industry Comparisons
- vs n8n: use n8n for deterministic, zero-variance pipelines; use OpenClaw for reasoning and ambiguous natural-language tasks.
- vs Claude Cowork: Cowork is a sandboxed, desktop-only proprietary app. OpenClaw is an open-source, mobile-first, 24/7 daemon with full system access.
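The operational loop in these notes (load persona and memory files, call the model, reply, persist memory) can be sketched in a few lines. The file names match the episode, but the loop body and the `call_llm` stub are illustrative only, not OpenClaw's actual Node.js code:

```python
from pathlib import Path

def read_or_empty(path):
    p = Path(path)
    return p.read_text() if p.exists() else ""

def call_llm(system_prompt, message):
    # Stub standing in for the real model/tool-execution call.
    return f"ack: {message}"

def handle_message(message, base="."):
    # Gateway loop from the notes: SOUL.md (personality),
    # USER.md (user context), MEMORY.md (persistent history).
    soul = read_or_empty(f"{base}/SOUL.md")
    user = read_or_empty(f"{base}/USER.md")
    memory = read_or_empty(f"{base}/MEMORY.md")
    system_prompt = "\n".join([soul, user, memory])
    reply = call_llm(system_prompt, message)
    # Append the exchange to persistent memory so context
    # compounds across sessions, as the notes describe.
    with open(f"{base}/MEMORY.md", "a") as f:
        f.write(f"user: {message}\nagent: {reply}\n")
    return reply
```

Because every turn is appended to MEMORY.md and re-read on the next turn, the agent's context grows over time, which is the "compounds context over months" behavior the episode highlights.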
Professional Applications
- Therapy: voice-to-SOAP-note transcription. PHI requires local Ollama models due to a lack of encryption at rest in OpenClaw.
- Marketing: claw-ads for multi-platform ad management, Mixpost for scheduling, and SearXNG for search.
- Finance: receipt OCR and Google Drive filing. Requires human review to mitigate non-deterministic LLM errors.
- Real Estate: proactive transaction-deadline monitoring and memory-driven buyer matching.

Security and Operations
- Hardening: bind to localhost, set auth tokens, and use Tailscale for remote access. Default settings are unsafe and have exposed over 135,000 instances.
- Injection defense: add instructions to SOUL.md to treat external emails and web pages as hostile.
- Costs: the software is MIT-licensed. API costs are paid per-token or bundled via a Claude subscription key.
- Onboarding: run the BOOTSTRAP.md flow immediately after installation to define the agent's personality before requesting tasks.
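The hardening advice above (bind to localhost, require an auth token, tunnel remote access) amounts to two request-time checks. A minimal sketch, where the function name and token scheme are illustrative rather than OpenClaw's real gateway code:

```python
import hmac

def authorize(client_ip, presented_token, expected_token):
    """Reject any request that is not local and token-authenticated.
    Binding to 127.0.0.1 is the first line of defense; the token
    guards access that arrives over a tunnel (e.g. Tailscale)."""
    if client_ip not in ("127.0.0.1", "::1"):
        return False  # not bound to localhost -> refuse outright
    # Constant-time comparison avoids token guessing via timing.
    return hmac.compare_digest(presented_token, expected_token)
```

An instance that skips both checks is reachable and unauthenticated, which is exactly the default-settings exposure the episode warns about.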
This week, we're sharing two segments. First up, a chat with Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation and developer of Rayhunter. Rayhunter is open-source firmware that turns specific mobile hotspots into IMSI-catcher detectors, effectively scanning for and logging any signs of fake cell towers (often known under the brand name Stingray) in the area. Law enforcement has at times deployed these as a way of collecting information about phones in the area and could use them to intercept some communications, like SMS or phone calls. Cooper talks about what's known of law enforcement use of IMSI-catchers, what has been observed in the data collected by deployed Rayhunters, phone security at demonstrations, and related topics.

Then you'll hear Radio Ausbruch from Frieberg, from this month's B(A)D News podcast from the A-Radio Network, talking about the repression and de-banking of anti-repression projects like ABC Dresden and Rote Hilfe in Germany based on pressure from the US government related to the so-called Antifa Ost case. This carries heavy implications for prisoner support, anti-racist, and other social struggles.

Links
- Cooper at DEF CON talking about Rayhunter: https://m.youtube.com/watch?v=meC2JqNAbCA
- EFF on what Rayhunter has found so far: https://www.eff.org/deeplinks/2025/09/rayhunter-what-we-have-found-so-far
- GitHub for Rayhunter: https://github.com/EFForg/rayhunter
- EFF Mattermost chat platform: https://opensource.eff.org/
- A project for detecting Meta Ray-Ban sunglasses: https://github.com/NullPxl/banrays
- Ouispy bluetooth scanning and notification tool: https://github.com/colonelpanichacks/oui-spy

Featured Track: TFSR by The Willows Whisper
Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.

We discuss:
1. How Claude Code grew from a quick hack to 4% of public GitHub commits, with daily active users doubling last month
2. The counterintuitive product principles that drove Claude Code's success
3. Why Boris believes coding is "solved"
4. The latent demand that shaped Claude Code and Cowork
5. Practical tips for getting the most out of Claude Code and Cowork
6. How underfunding teams and giving them unlimited tokens leads to better AI products
7. Why Boris briefly left Anthropic for Cursor, then returned after just two weeks
8. Three principles Boris shares with every new team member

Brought to you by:
- DX: the developer intelligence platform designed by leading researchers: https://getdx.com/lenny
- Sentry: code breaks, fix it faster: https://sentry.io/lenny
- Metaview: the AI platform for recruiting: https://metaview.ai/lenny

Episode transcript: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Boris Cherny:
- X: https://x.com/bcherny
- LinkedIn: https://www.linkedin.com/in/bcherny
- Website: https://borischerny.com

Where to find Lenny:
- Newsletter: https://www.lennysnewsletter.com
- X: https://twitter.com/lennysan
- LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Boris and Claude Code
(03:45) Why Boris briefly left Anthropic for Cursor (and what brought him back)
(05:35) One year of Claude Code
(08:41) The origin story of Claude Code
(13:29) How fast AI is transforming software development
(15:01) The importance of experimentation in AI innovation
(16:17) Boris's current coding workflow (100% AI-written)
(17:32) The next frontier
(22:24) The downside of rapid innovation
(24:02) Principles for the Claude Code team
(26:48) Why you should give engineers unlimited tokens
(27:55) Will coding skills still matter in the future?
(32:15) The printing press analogy for AI's impact
(36:01) Which roles will AI transform next?
(40:41) Tips for succeeding in the AI era
(44:37) Poll: Which roles are enjoying their jobs more with AI
(46:32) The principle of latent demand in product development
(51:53) How Cowork was built in just 10 days
(54:04) The three layers of AI safety at Anthropic
(59:35) Anxiety when AI agents aren't working
(01:02:25) Boris's Ukrainian roots
(01:03:21) Advice for building AI products
(01:08:38) Pro tips for using Claude Code effectively
(01:11:16) Thoughts on Codex
(01:12:13) Boris's post-AGI plans
(01:14:02) Lightning round and final thoughts

References: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
OpenClaw's creator makes headlines by joining OpenAI after GitHub fame and a whirlwind of VC and big-tech offers, redefining what's possible for independent developers in the AI arms race. Is this the year agentic AI goes mainstream, and are the big players ready for that disruption?

Stories:
- OpenClaw, OpenAI and the future | Peter Steinberger
- OpenAI disbands mission alignment team
- Opinion | I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw. - The New York Times
- Introducing GPT‑5.3‑Codex‑Spark
- Anthropic releases Sonnet 4.6
- Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute
- Google's Pixel 10a Launches on March 5 for $499
- Google's AI drug discovery spinoff Isomorphic Labs claims major leap beyond AlphaFold 3
- Gemini 3 Deep Think: AI model update designed for science
- Radio host David Greene says Google's NotebookLM tool stole his voice
- A new way to express yourself: Gemini can now create music
- Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood
- GPT-5 outperforms federal judges 100% to 52% in legal reasoning experiment
- An AI project is creating videos to go with Supreme Court justices' real words
- I used Claude to negotiate $163,000 off a hospital bill. In a complex healthcare system, AI is giving patients power.
- Sony Tech Can Identify Original Music in AI-Generated Songs
- AI Pioneer Fei-Fei Li's Startup World Labs Raises $1 Billion
- Yann v. Yoshua on directed systems
- Dr. Oz pushes AI avatars as a fix for rural health care. Not so fast, critics say
- An AI Agent Published a Hit Piece on Me
- An Ars Technica Reporter Blamed A.I. Tools for Fabricating Quotes in a Bizarre A.I. Story
- Plain Dealer using AI to write reporters' stories
- Mediahuis trials use of AI agents to carry out 'first-line' news reporting
- DJI's first robovac is an autonomous cleaning drone you can't trust
- Leaked Email Suggests Ring Plans to Expand 'Search Party' Surveillance Beyond Dogs
- ai;dr
- I hate my AI pet with every fiber of my being
- Thanks a lot, AI: Hard drives are sold out for the year, says WD
- 'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
- peon-ping — Stop babysitting your terminal
- Hugo Barra makes a to-do agent
- Raspberry Pi soars 40% as CEO buys stock, AI chatter builds

Hosts: Leo Laporte, Jeff Jarvis, and Emily Forlini

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:
- monarch.com with code IM
- bitwarden.com/twit
- preview.modulate.ai
- spaceship.com/twit
This is my second conversation with Josh Kushner, founder and managing partner of Thrive Capital. I recorded this conversation in October after publishing the Colossus cover story about him and Thrive. Given the overwhelming response, we created some breathing room before releasing it. Josh started Thrive in 2011. The firm now manages approximately $50 billion with a very small investment team. What makes Thrive different is how concentrated they are and how involved they get with their portfolio companies. We cover the iconic investments that defined Thrive: Instagram, Stripe, GitHub, and spend a lot of time on OpenAI. Josh explains how Thrive thinks about investing today and the three categories they're currently focused on. Josh also talks about building the firm, why they keep the team small, and what he's learned from A24 about enabling artists to do their best work. He shares personal stories that shaped him, including his grandmother's experience surviving the Holocaust, and lessons from Stan Druckenmiller, Jon Winkelried, and others at formative moments in Thrive's history. Please enjoy my great conversation with Josh Kushner. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- Become a Colossus member to get our quarterly print magazine and private audio experience, including exclusive profiles and early access to select episodes. Subscribe at colossus.com/subscribe. ----- Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus. ----- Trusted by thousands of businesses, Vanta continuously monitors your security posture and streamlines audits so you can win enterprise deals and build customer trust without the traditional overhead. Visit vanta.com/invest. 
----- WorkOS is a developer platform that enables SaaS companies to quickly add enterprise features to their applications. Visit WorkOS.com to transform your application into an enterprise-ready solution in minutes, not months. ----- Rogo is an AI-powered platform that automates accounts payable workflows, enabling finance teams to process invoices faster and with greater accuracy. Learn more at Rogo.ai/invest. ----- Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Visit ridgelineapps.com. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Timestamps: (00:00:00) Welcome to Invest Like the Best (00:02:43) Intro: Josh Kushner (00:03:46) How Thrive Has Changed Since 2023 (00:05:18) Thrive's Entrepreneurial Culture (00:12:22) The Power of Small Teams (00:13:35) Sponsors (00:14:35) Concentration as Differentiation (00:16:16) The Github Deal (00:18:08) Lesson from Stan Druckenmiller (00:20:37) Leading Stripe's $50 Billion Round (00:23:16) Instagram: Doubling an Investment in Days (00:25:43) Isomorphic: Thrive as an Enabling Technology (00:27:04) Thrive & A24 (00:28:19) OpenAI: The Product Josh Couldn't Unsee (00:32:09) Pricing the OpenAI Investment (00:33:40) OpenAI and Power (00:35:26) Finding Joy in Hard Work (00:39:15) Inside View of the Tech & AI Landscape (00:42:28) Three Investment Categories Thrive is Focused On (00:44:37) Thrive Holdings: Inside-Out Disruption (00:48:54) Competition in Venture (00:50:49) Sponsors (00:51:48) Thrive's Immutable Values (00:54:21) A Family Story of Survival (00:56:43) The American Dream (00:58:03) What Artists Can Teach Investors (01:00:26) Never Compromise Your Values (01:01:33) The Story Behind Josh's Forever Watch
Peter Steinberger is the creator of OpenClaw, an open-source AI agent framework that’s the fastest-growing project in GitHub history. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep491-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/peter-steinberger-transcript CONTACT LEX: Feedback – give feedback to Lex: https://lexfridman.com/survey AMA – submit questions, videos or call-in: https://lexfridman.com/ama Hiring – join our team: https://lexfridman.com/hiring Other – other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Peter’s X: https://x.com/steipete Peter’s GitHub: https://github.com/steipete Peter’s Website: https://steipete.com Peter’s LinkedIn: https://www.linkedin.com/in/steipete OpenClaw Website: https://openclaw.ai OpenClaw GitHub: https://github.com/openclaw/openclaw OpenClaw Discord: https://discord.gg/openclaw SPONSORS: To support this podcast, check out our sponsors & get discounts: Perplexity: AI-powered answer engine. Go to https://perplexity.ai/ Quo: Phone system (calls, texts, contacts) for businesses. Go to https://quo.com/lex CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex Fin: AI agent for customer service. Go to https://fin.ai/lex Blitzy: AI agent for large enterprise codebases. Go to https://blitzy.com/lex Shopify: Sell stuff online. Go to https://shopify.com/lex LMNT: Zero-sugar electrolyte drink mix. 
Go to https://drinkLMNT.com/lex OUTLINE: (00:00) – Introduction (03:51) – Sponsors, Comments, and Reflections (15:29) – OpenClaw origin story (18:48) – Mind-blowing moment (28:15) – Why OpenClaw went viral (32:12) – Self-modifying AI agent (36:57) – Name-change drama (54:07) – Moltbook saga (1:02:26) – OpenClaw security concerns (1:11:07) – How to code with AI agents (1:42:02) – Programming setup (1:48:45) – GPT Codex 5.3 vs Claude Opus 4.6 (1:57:52) – Best AI agent for programming (2:19:52) – Life story and career advice (2:23:49) – Money and happiness (2:27:41) – Acquisition offers from OpenAI and Meta (2:44:51) – How OpenClaw works (2:56:09) – AI slop (3:02:13) – AI agents will replace 80% of apps (3:10:50) – Will AI replace programmers? (3:22:50) – Future of OpenClaw community