Jason Lemkin is the founder of SaaStr, the world's largest community for software founders, and a veteran SaaS investor who has deployed over $200 million into B2B startups. After his last salesperson quit, Jason made a radical decision: replace his entire go-to-market team with AI agents. What started as an experiment has transformed into a new operating model, where 20 AI agents managed by just 1.2 humans now do the work previously handled by a team of 10 SDRs and AEs. In this conversation, Jason shares his hands-on experience implementing AI to run his sales org, including what works, what doesn't, and how the GTM landscape is quickly being transformed.

We discuss:
1. How AI is fundamentally changing the sales function
2. Why most SDRs and BDRs will be “extinct” within a year
3. What Jason is observing across his portfolio about AI adoption in GTM
4. How to become “hyper-employable” in the age of AI
5. The specific AI tools and tactics he's using that have been working best
6. Practical frameworks for integrating AI into your sales motion without losing what works
7. Jason's 2026 predictions on where SaaS and GTM are heading next

Brought to you by:
• DX—The developer intelligence platform designed by leading researchers
• Vercel—Your collaborative AI assistant to design, iterate, and scale full-stack applications for the web
• Datadog—Now home to Eppo, the leading experimentation and feature flagging platform

Transcript: https://www.lennysnewsletter.com/p/we-replaced-our-sales-team-with-20-ai-agents

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/182902716/my-biggest-takeaways-from-this-conversation

Where to find Jason Lemkin:
• X: https://x.com/jasonlk
• LinkedIn: https://www.linkedin.com/in/jasonmlemkin
• Website: https://www.saastr.com
• Substack: https://substack.com/@cloud

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Jason Lemkin
(04:36) What SaaStr does
(07:13) AI's impact on sales teams
(10:11) How SaaStr's AI agents work and their performance
(14:18) How go-to-market is changing in the AI era
(19:19) The future of SDRs, BDRs, and AEs in sales
(22:03) Why leadership roles are safe
(23:43) How to be in the 20% who thrive in the AI sales future
(28:40) Why you shouldn't build your own AI tools
(30:10) Specific AI agents and their applications
(36:40) Challenges and learnings in AI deployment
(42:11) Making AI-generated emails good (not just acceptable)
(47:31) When humans still beat AI in sales
(52:39) An overview of SaaStr's org
(53:50) The role of human oversight in AI operations
(58:37) Advice for salespeople and founders in the AI era
(01:05:40) Forward-deployed engineers
(01:08:08) What's changing and what's staying the same in sales
(01:16:21) Why AI is creating more work, not less
(01:19:32) Why Jason says these are magical times
(01:25:25) The "incognito mode test" for finding AI opportunities
(01:27:19) The impact of AI on jobs
(01:30:18) Lightning round and final thoughts

Referenced:
• Building a world-class sales org | Jason Lemkin (SaaStr): https://www.lennysnewsletter.com/p/building-a-world-class-sales-org
• SaaStr Annual: https://www.saastrannual.com
• Delphi: https://www.delphi.ai/saastr/talk
• Amelia Lerutte on LinkedIn: https://www.linkedin.com/in/amelialerutte/
• Vercel: https://vercel.com
• What world-class GTM looks like in 2026 | Jeanne DeWitt Grosser (Vercel, Stripe, Google): https://www.lennysnewsletter.com/p/what-the-best-gtm-teams-do-differently
• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch
• Replit: https://replit.com
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• ElevenLabs: https://elevenlabs.io
• The exact AI playbook (using MCPs, custom GPTs, Granola) that saved ElevenLabs $100k+ and helps them ship daily | Luke Harries (Head of Growth): https://www.lennysnewsletter.com/p/the-ai-marketing-stack
• Bolt: https://bolt.new
• Lovable: https://lovable.dev
• Harvey: https://www.harvey.ai
• Samsara: https://www.samsara.com/products/platform/ai-samsara-intelligence
• UiPath: https://www.uipath.com
• Denise Dresser on LinkedIn: https://www.linkedin.com/in/denisedresser
• Agentforce: https://www.salesforce.com/form/agentforce
• SaaStr's AI Agent Playbook: https://saastr.ai/agents
• Brian Halligan on LinkedIn: https://www.linkedin.com/in/brianhalligan
• Brian Halligan's AI: https://www.delphi.ai/minds/bhalligan
• Sierra: https://sierra.ai
• Fin: https://fin.ai
• Deccan: https://www.deccan.ai
• Artisan: https://www.artisan.co
• Qualified: https://www.qualified.com
• Claude: https://claude.ai
• HubSpot: https://www.hubspot.com
• Gamma: https://gamma.app
• Sam Blond on LinkedIn: https://www.linkedin.com/in/sam-blond-791026b
• Brex: https://www.brex.com
• Outreach: https://www.outreach.io
• Gong: https://www.gong.io
• Salesloft: https://www.salesloft.com
• Mixmax: https://www.mixmax.com
• “Sell the alpha, not the feature”: The enterprise sales playbook for $1M to $10M ARR | Jen Abel: https://www.lennysnewsletter.com/p/the-enterprise-sales-playbook-1m-to-10m-arr
• Clay: https://www.clay.com
• Owner: https://www.owner.com
• Momentum: https://www.momentum.io
• Attention: https://www.attention.com
• Granola: https://www.granola.ai
• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff
• Palantir: https://www.palantir.com
• Databricks: https://www.databricks.com
• Garry Tan on LinkedIn: https://www.linkedin.com/in/garrytan
• Rippling: https://www.rippling.com
• Cursor: https://cursor.com
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• The new AI growth playbook for 2026: How Lovable hit $200M ARR in one year | Elena Verna (Head of Growth): https://www.lennysnewsletter.com/p/the-new-ai-growth-playbook-for-2026-elena-verna
• Pluribus on AppleTV+: https://tv.apple.com/us/show/pluribus/umc.cmc.37axgovs2yozlyh3c2cmwzlza
• Sora: https://openai.com/sora
• Reve: https://app.reve.com
• Everything That Breaks on the Way to $1B ARR, with Mailchimp Co-Founder Ben Chestnut: https://www.saastr.com/everything-that-breaks-on-the-way-to-1b-arr-with-mailchimp-co-founder-ben-chestnut/
• The Revenue Playbook: Rippling's Top 3 Growth Tactics at Scale, with Rippling CRO Matt Plank: https://www.youtube.com/watch?v=h3eYtzBpjRw
• 10 contrarian leadership truths every leader needs to hear | Matt MacInnis (Rippling): https://www.lennysnewsletter.com/p/10-contrarian-leadership-truths

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

To hear more, visit www.lennysnewsletter.com
From creating SWE-bench in a Princeton basement to shipping CodeClash, SWE-bench Multimodal, and SWE-bench Multilingual, John Yang has spent the last year and a half watching his benchmark become the de facto standard for evaluating AI coding agents—trusted by Cognition (Devin), OpenAI, Anthropic, and every major lab racing to solve software engineering at scale. We caught up with John live at NeurIPS 2025 to dig into the state of code evals heading into 2026: why SWE-bench went from ignored (October 2023) to the industry standard after Devin's launch (and how Walden emailed him two weeks before the big reveal), how the benchmark evolved from Django-heavy to nine languages across 40 repos (JavaScript, Rust, Java, C, Ruby), why unit tests as verification are limiting and long-running agent tournaments might be the future (CodeClash: agents maintain codebases, compete in arenas, and iterate over multiple rounds), the proliferation of SWE-bench variants (SWE-bench Pro, SWE-bench Live, SWE-Efficiency, AlgoTune, SciCode) and how benchmark authors are now justifying their splits with curation techniques instead of just "more repos," why Tau-bench's "impossible tasks" controversy is actually a feature not a bug (intentionally including impossible tasks flags cheating), the tension between long autonomy (5-hour runs) vs. interactivity (Cognition's emphasis on fast back-and-forth), how Terminal-bench unlocked creativity by letting PhD students and non-coders design environments beyond GitHub issues and PRs, the academic data problem (companies like Cognition and Cursor have rich user interaction data, academics need user simulators or compelling products like LMArena to get similar signal), and his vision for CodeClash as a testbed for human-AI collaboration—freeze model capability, vary the collaboration setup (solo agent, multi-agent, human+agent), and measure how interaction patterns change as models climb the ladder from code completion to full codebase reasoning. 
We discuss:
• John's path: Princeton → SWE-bench (October 2023) → Stanford PhD with Diyi Yang and the Iris Group, focusing on code evals, human-AI collaboration, and long-running agent benchmarks
• The SWE-bench origin story: released October 2023, mostly ignored until Cognition's Devin launch kicked off the arms race (Walden emailed John two weeks before: "we have a good number")
• SWE-bench Verified: the curated, high-quality split that became the standard for serious evals
• SWE-bench Multimodal and Multilingual: nine languages (JavaScript, Rust, Java, C, Ruby) across 40 repos, moving beyond the Django-heavy original distribution
• The SWE-bench Pro controversy: independent authors used the "SWE-bench" name without John's blessing, but he's okay with it ("congrats to them, it's a great benchmark")
• CodeClash: John's new benchmark for long-horizon development—agents maintain their own codebases, edit and improve them each round, then compete in arenas (programming games like Halite, economic tasks like GDP optimization)
• SWE-Efficiency (Jeffrey Maugh, John's high school classmate): optimize code for speed without changing behavior (parallelization, SIMD operations)
• AlgoTune, SciCode, Terminal-bench, Tau-bench, SecBench, SRE-bench: the Cambrian explosion of code evals, each diving into different domains (security, SRE, science, user simulation)
• The Tau-bench "impossible tasks" debate: some tasks are underspecified or impossible, but John thinks that's actually a feature (flags cheating if you score above 75%)
• Cognition's research focus: codebase understanding (retrieval++), helping humans understand their own codebases, and automatic context engineering for LLMs (research sub-agents)
• The vision: CodeClash as a testbed for human-AI collaboration—vary the setup (solo agent, multi-agent, human+agent), freeze model capability, and measure how interaction changes as models improve

— John Yang
SWE-bench: https://www.swebench.com
X: https://x.com/jyangballin

Chapters
00:00:00 Introduction: John Yang on SWE-bench and Code Evaluations
00:00:31 SWE-bench Origins and Devin's Impact on the Coding Agent Arms Race
00:01:09 SWE-bench Ecosystem: Verified, Pro, Multimodal, and Multilingual Variants
00:02:17 Moving Beyond Django: Diversifying Code Evaluation Repositories
00:03:08 CodeClash: Long-Horizon Development Through Programming Tournaments
00:04:41 From Halite to Economic Value: Designing Competitive Coding Arenas
00:06:04 Ofir's Lab: SWE-Efficiency, AlgoTune, and SciCode for Scientific Computing
00:07:52 The Benchmark Landscape: Tau-bench, Terminal-bench, and User Simulation
00:09:20 The Impossible Task Debate: Refusals, Ambiguity, and Benchmark Integrity
00:12:32 The Future of Code Evals: Long Autonomy vs Human-AI Collaboration
00:14:37 Call to Action: User Interaction Data and Codebase Understanding Research
From investing through the modern data stack era (DBT, Fivetran, and the analytics explosion) to now investing at the frontier of AI infrastructure and applications at Amplify Partners, Sarah Catanzaro has spent years at the intersection of data, compute, and intelligence—watching categories emerge, merge, and occasionally disappoint. We caught up with Sarah live at NeurIPS 2025 to dig into the state of AI startups heading into 2026: why $100M+ seed rounds with no near-term roadmap are now the norm (and why that terrifies her), what the DBT-Fivetran merger really signals about the modern data stack (spoiler: it's not dead, just ready for IPO), how frontier labs are using DBT and Fivetran to manage training data and agent analytics at scale, why data catalogs failed as standalone products but might succeed as metadata services for agents, the consumerization of AI and why personalization (memory, continual learning, K-factor) is the 2026 unlock for retention and growth, why she thinks RL environments are a fad and real-world logs beat synthetic clones every time, and her thesis for the most exciting AI startups: companies that marry hard research problems (RAG, rule-following, continual learning) with killer applications that were simply impossible before. 
We discuss:
• The DBT-Fivetran merger: not the death of the modern data stack, but a path to IPO scale (targeting $600M+ combined revenue) and a signal that both companies were already winning their categories
• How frontier labs use data infrastructure: DBT and Fivetran for training data curation, agent analytics, and managing increasingly complex interactions—plus the rise of transactional databases (RocksDB) and efficient data loading (Vortex) for GPU-bound workloads
• Why data catalogs failed: built for humans when they should have been built for machines, focused on discoverability when the real opportunity was governance, and ultimately subsumed as features inside Snowflake, DBT, and Fivetran
• The $100M+ seed phenomenon: raising massive rounds at billion-dollar valuations with no 6-month roadmap, seven-day decision windows, and founders optimizing for signal ("we're a unicorn") over partnership or dilution discipline
• Why world models are overhyped but underspecified: three competing definitions, unclear generalization across use cases (video games ≠ robotics ≠ autonomous driving), and a research problem masquerading as a product category
• The 2026 theme: consumerization of AI via personalization—memory management, continual learning, and solving retention/churn by making products learn skills, preferences, and adapt as the world changes (not just storing facts in cursor rules)
• Why RL environments are a fad: labs are paying 7–8 figures for synthetic clones when real-world logs, traces, and user activity (à la Cursor) are richer, cheaper, and more generalizable
• Sarah's investment thesis: research-driven applications that solve hard technical problems (RAG for Harvey, rule-following for Sierra, continual learning for the next killer app) and unlock experiences that were impossible before
• Infrastructure bets: memory, continual learning, stateful inference, and the systems challenges of loading/unloading personalized weights at scale
• Why K-factor and growth fundamentals matter again: AI felt magical in 2023–2024, but as the magic fades, retention and virality are back—and most AI founders have never heard of K-factor

— Sarah Catanzaro
X: https://x.com/sarahcat21
Amplify Partners: https://amplifypartners.com/

Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/

Chapters
00:00:00 Introduction: Sarah Catanzaro's Journey from Data to AI
00:01:02 The DBT-Fivetran Merger: Not the End of the Modern Data Stack
00:05:26 Data Catalogs and What Went Wrong
00:08:16 Data Infrastructure at AI Labs: Surprising Insights
00:10:13 The Crazy Funding Environment of 2024-2025
00:17:18 World Models: Hype, Confusion, and Market Potential
00:18:59 Memory Management and Continual Learning: The Next Frontier
00:23:27 Agent Environments: Just a Fad?
00:25:48 The Perfect AI Startup: Research Meets Application
00:28:02 Closing Thoughts and Where to Find Sarah
From Berkeley robotics and OpenAI's 2017 Dota-era internship to shipping RL breakthroughs on GPT-4o, o1, and o3, and now leading model development at Cursor, Ashvin Nair has done it all. We caught up with Ashvin at NeurIPS 2025 to dig into the inside story of OpenAI's reasoning team (spoiler: it went from a dozen people to 300+), why IOI Gold felt reachable in 2022 but somehow didn't change the world when o1 actually achieved it, how RL doesn't generalize beyond the training distribution (and why that means you need to bring economically useful tasks into distribution by co-designing products and models), the deeper lessons from the RL research era (2017–2022) and why most of it didn't pan out because the community overfitted to benchmarks, how Cursor is uniquely positioned to do continual learning at scale with policy updates every two hours and product-model co-design that keeps engineers in the loop instead of context-switching into ADHD hell, and his bet that the next paradigm shift is continual learning with infinite memory—where models experience something once (a bug, a mistake, a user pattern) and never forget it, storing millions of deployment tokens in weights without overloading capacity.

We discuss:
• Ashvin's path: Berkeley robotics PhD → OpenAI 2017 intern (Dota era) → o1/o3 reasoning team → Cursor ML lead in three months
• Why robotics people are the most grounded at NeurIPS (they work with the real world) and simulation people are the most unhinged (Lex Fridman's take)
• The IOI Gold paradox: "If you told me we'd achieve IOI Gold in 2022, I'd assume we could all go on vacation—AI solved, no point working anymore. But life is still the same."
• The RL research era (2017–2022) and why most of it didn't pan out: overfitting to benchmarks, too many implicit knobs to tune, and the community rewarding complex ideas over simple ones that generalize
• Inside the o1 origin story: a dozen people, conviction from Ilya and Jakub Pachocki that RL would work, small-scale prototypes producing "surprisingly accurate reasoning traces" on math, and first-principles belief that scaled
• The reasoning team grew from ~12 to 300+ people as o1 became a product and safety, tooling, and deployment scaled up
• Why Cursor is uniquely positioned for continual learning: policy updates every two hours (online RL on tab), product and ML sitting next to each other, and the entire software engineering workflow (code, logs, debugging, Datadog) living in the product
• Composer as the start of product-model co-design: smart enough to use, fast enough to stay in the loop, and built by a 20–25 person ML team with high-taste co-founders who code daily
• The next paradigm shift: continual learning with infinite memory—models that experience something once (a bug, a user mistake) and store it in weights forever, learning from millions of deployment tokens without overloading capacity (trillions of pretraining tokens = plenty of room)
• Why off-policy RL is unstable (Ashvin's favorite interview question) and why Cursor does two-day work trials instead of whiteboard interviews
• The vision: automate software engineering as a process (not just answering prompts), co-design products so the entire workflow (write code, check logs, debug, iterate) is in-distribution for RL, and make models that never make the same mistake twice

— Ashvin Nair
Cursor: https://cursor.com
X: https://x.com/ashvinnair_

Chapters
00:00:00 Introduction: From Robotics to Cursor via OpenAI
00:01:58 The Robotics to LLM Agent Transition: Why Code Won
00:09:11 RL Research Winter and Academic Overfitting
00:11:45 The Scaling Era and Moving Goalposts: IOI Gold Doesn't Mean AGI
00:21:30 OpenAI's Reasoning Journey: From Codex to o1
00:20:03 The Blip: Thanksgiving 2023 and OpenAI Governance
00:22:39 RL for Reasoning: The o-Series Conviction and Scaling
00:25:47 o1 to o3: Smooth Internal Progress vs External Hype Cycles
00:33:07 Why Cursor: Co-Designing Products and Models for Real Work
00:34:14 Composer and the Future: Online Learning Every Two Hours
00:35:15 Continual Learning: The Missing Paradigm Shift
00:44:00 Hiring at Cursor and Why Off-Policy RL is Unstable
In 2025, a new expression established itself in the tech vocabulary: "vibe coding." Behind this intriguing term lies a practice that is profoundly transforming the way software is developed.

Vibe coding, which might be translated as "intuitive programming," describes an approach in which the developer no longer codes line by line but simply describes what they want to an artificial intelligence. Popularized by Andrej Karpathy, former head of AI at Tesla and co-founder of OpenAI, the concept emerged in developer communities before spreading widely across the digital ecosystem.

In practice, all it now takes is a request in natural language: create a Python script, design a web page with a form, modify an application's interface, or even build a complete game or mobile app. This method saves a spectacular amount of time and opens software creation to non-developers, who can produce working tools for the web, for mobile, or for business uses such as CMSs and ERPs.

Many tools embody this trend, starting with GitHub Copilot, but also Cursor, Windsurf, and general-purpose assistants such as ChatGPT, Claude, and Gemini, which generate code to be integrated in the traditional way. Other solutions go further still, directly producing ready-to-use applications, as the Swedish startup Lovable does.

In this episode, Sébastien Stormacq, head of developer relations at AWS, shares a concrete experience: building a Pac-Man-inspired game in one hour, without writing a single line of code, thanks to vibe coding. A revealing example of the power, but also the limits, of this approach.

The phenomenon raises crucial questions: the quality and security of generated code, the risk of major bugs, and the impact on employment. While vibe coding speeds up teams' work and boosts the productivity of experienced developers, it leaves junior profiles more exposed. One thing is certain: more than a mere tool, vibe coding is profoundly redefining the developer's job.

-----------
♥️ Support: https://mondenumerique.info/don
Tomek is back from skiing, and in the tech world... an avalanche of news! ⛷️ Wojtek, Tomek, and Sebastian take on a week full of billion-dollar deals, new unicorns, and agents that fix code on their own. What do we talk about?
Want to build your own website but don't know how to code? This episode is for you.

Join JJ and Bubble expert Gio as they show how beginners can use AI-powered tools to design, build, and launch a personal website from scratch — for free.

You'll learn how Vibe Coding works, how AI can help you write and edit code, and how tools like Cursor make building websites feel approachable, even if you've never coded before. JJ also shares his own journey from no-code tools to AI-assisted development, showing how anyone can level up their skills.

By the end, you'll understand how to preview your site locally, save your work with GitHub, and deploy a live website on the internet — all with AI helping every step of the way.

Perfect for students, creators, and curious beginners who want to build real projects using AI.

What you'll learn:
• What “Vibe Coding” is and why it's beginner-friendly
• How AI helps you write, edit, and understand code
• How to preview your website before publishing
• How to host a personal website for free
• How no-code and AI tools work together

Timestamps:
00:00 What is Vibe Coding?
00:33 Gio's experience getting started
03:59 Intro to GitHub (no stress)
06:04 Creating and managing projects
13:34 Using Cursor to build locally
16:00 Editing and previewing with AI
26:18 Deploying with Cloud tools
28:30 Publishing your site live
32:16 No-code vs AI-assisted building
41:50 Where AI and no-code are headed
48:10 Final thoughts + course update
Here's what I'm building with as of Dec 2025.

Lovable - https://lovable.dev/
Cursor - https://www.cursor.com
Claude Code - https://claude.ai/ (Sonnet/Opus 4.5)
Vercel - https://vercel.com/
Railway - https://railway.com/
Supabase - https://supabase.com/

Free Email Course - https://bootstrappersparadise.com/course
Online Community - https://bootstrappersparadise.com/community
Bootstrapper's Paradise - https://bootstrappersparadise.com/
Note: Steve and Gene's talk on Vibe Coding and the post-IDE world was one of the top talks of AIE CODE: https://www.youtube.com/watch?v=7Dtu2bilcFs&t=1019s&pp=0gcJCU0KAYcqIYzv

From building legendary platforms at Google and Amazon to authoring one of the most influential essays on AI-powered development (Revenge of the Junior Developer, quoted by Dario Amodei himself), Steve Yegge has spent decades at the frontier of software engineering—and now he's leading the charge into what he calls the "factory farming" era of code. After stints at Sourcegraph and building Beads (a purely vibe-coded issue tracker with tens of thousands of users), Steve co-authored The Vibe Coding Book and is now building VC (VibeCoder), an agent orchestration dashboard designed to move developers from writing code to managing fleets of AI agents that coordinate, parallelize, and ship features while you sleep.

We sat down with Steve at AI Engineer Summit to dig into why Claude Code, Cursor, and the entire 2024 stack are already obsolete; what it actually takes to trust an agent after 2,000 hours of practice (hint: they will delete your production database if you anthropomorphize them); why the real skill is no longer writing code but orchestrating agents like a NASCAR pit crew; how merging has become the new wall that every 10x-productive team is hitting (and why one company's solution is literally "one engineer per repo"); the rise of multi-agent workflows where agents reserve files, message each other via MCP, and coordinate like a little village; why Steve believes that if you're still using an IDE to write code by January 1st, you're a bad engineer; how the 12–15-year experience bracket is the most resistant demographic (and why their identity is tied to obsolete workflows); the hidden chaos inside OpenAI, Anthropic, and Google as they scale at breakneck speed; why rewriting from scratch is now faster than refactoring for a growing class of codebases; and his 2025 prediction: we're moving from subsistence agriculture to John Deere-scale factory farming of code, and the Luddite backlash is only just beginning.

We discuss:
• Why Claude Code, Cursor, and agentic coding tools are already last year's tech—and what comes next: agent orchestration dashboards where you manage fleets, not write lines
• The 2,000-hour rule: why it takes a full year of daily use before you can predict what an LLM will do, and why trust = predictability, not capability
• Steve's hot take: if you're still using an IDE to develop code by January 1st, 2025, you're a bad engineer—because the abstraction layer has moved from models to full-stack agents
• The demographic most resistant to vibe coding: 12–15 years of experience, senior engineers whose identity is tied to the way they work today, and why they're about to become the interns
• Why anthropomorphizing LLMs is the biggest mistake: the "hot hand" fallacy, agent amnesia, and how Steve's agent once locked him out of prod by changing his password to "fix" a problem
• Should kids learn to code? Steve's take: learn to vibe code—understand functions, classes, architecture, and capabilities in a language-neutral way, but skip the syntax
• The 2025 vision: "factory farming of code" where orchestrators run Claude Code, scrub output, plan-implement-review-test in loops, and unlock programming for non-programmers at scale

— Steve Yegge
X: https://x.com/steve_yegge
Substack (Stevie's Tech Talks): https://steve-yegge.medium.com/
GitHub (VC / VibeCoder): https://github.com/yegge-labs

Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/

Chapters
00:00:00 Introduction: Steve Yegge on Vibe Coding and AI Engineering
00:00:59 The Backlash: Who Resists Vibe Coding and Why
00:04:26 The 2000 Hour Rule: Building Trust with AI Coding Tools
00:03:31 The January 1st Deadline: IDEs Are Becoming Obsolete
00:02:55 10X Productivity at OpenAI: The Performance Review Problem
00:07:49 The Hot Hand Fallacy: When AI Agents Betray Your Trust
00:11:12 Claude Code Isn't It: The Need for Agent Orchestration
00:15:20 The Orchestrator Revolution: From Claude Code to Agent Villages
00:18:46 The Merge Wall: The Biggest Unsolved Problem in AI Coding
00:26:33 Never Rewrite Your Code - Until Now: Joel Spolsky Was Wrong
00:22:43 Factory Farming Code: The John Deere Era of Software
00:29:27 Google's Gemini Turnaround and the AI Lab Chaos
00:33:20 Should Your Kids Learn to Code? The New Answer
00:34:59 Code MCP and the Gossip Rate: Latest Vibe Coding Discoveries
Nvidia acquires the assets of chip startup Groq for $20 billion. AI created more than 50 new billionaires in 2025, including the founders of Cursor, Lovable, and 11 Labs. OpenAI publishes usage data: 90 percent of users make fewer than five requests per day, and only 5 percent pay for the service. The New York Times compares Tesla and Waymo: Tesla has 30 robotaxis in Austin, Waymo 2,500 in total. Manager Magazin uncovers the Closed scandal: the CFO of the Hamburg fashion brand allegedly borrowed 20 million euros from the company. The US sanctions five European citizens, including former EU commissioner Thierry Breton and the managing directors of HateAid. Elon Musk is voted the most unpopular tech leader of 2025. Under the Digital Markets Act, Apple must open proximity pairing and notifications to third parties. And 61 percent of US pastors now use AI for their sermons.

Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you!

Philipp Glöckler and Philipp Klöckner talk today about:
(00:00:00) Intro
(00:01:15) Nvidia buys Groq for $20B
(00:17:16) 50 new AI billionaires in 2025
(00:27:29) OpenAI usage data: 90% under 5 requests/day
(00:30:45) OpenAI prompt packs for professions
(00:37:43) Tesla vs Waymo: 30 vs 2,500 robotaxis
(00:44:52) Closed insolvency: CFO borrows €20M
(00:54:42) US sanctions HateAid & Thierry Breton
(01:01:04) Elon Musk most unpopular tech leader
(01:05:15) Epstein files: Adobe redaction fails
(01:06:15) Apple opens AirPods pairing (EU DMA)
(01:09:21) 61% of pastors use AI for sermons

Shownotes
Nvidia poaches engineers from AI startup Groq - manager-magazin.de
AI created over 50 new billionaires in 2025 - forbes.com
Benedict Evans - linkedin.com
OpenAI Prompt Packs - academy.openai.com
Tesla robotaxis in Austin: competition for Waymo - nytimes.com
Fashion brand insolvency: how the bosses ruined everything - manager-magazin.de
Breton plans tech ban - apnews.com
Marco Rubio - patreon.com
Elon Musk didn't like tech - cybernews.com
Musk Christmas tweet - x.com
Epstein files: DOJ redactions and links - theverge.com
will the robot shoot the human? - youtu.be
iOS 26.3: improving AirPods pairing - macrumors.com
Pastors' AI sermon - cybernews.com
Glöcki AI Christmas video - youtube.com
Aaron and Brian review the Year in AI, hand out AI awards, and discuss the biggest AI trends from 2025. Maybe a few predictions will be made as well.

SHOW: 987
SHOW TRANSCRIPT: The Cloudcast #987 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"

SHOW SPONSORS:

SHOW NOTES:
CLOUD & AI NEWS OF THE MONTH - NOV 2025 (show)
CLOUD & AI NEWS OF THE MONTH - OCT 2025 (show)
CLOUD & AI NEWS OF THE MONTH - SEPT 2025 (show)
CLOUD & AI NEWS OF THE MONTH - AUG 2025 (show)
CLOUD & AI NEWS OF THE MONTH - JUL 2025 (show)
CLOUD & AI NEWS OF THE MONTH - JUN 2025 (show)
CLOUD & AI NEWS OF THE MONTH - MAY 2025 (show)
CLOUD & AI NEWS OF THE MONTH - APR 2025 (show)
CLOUD & AI NEWS OF THE MONTH - MAR 2025 (show)
CLOUD & AI NEWS OF THE MONTH - FEB 2025 (show)
CLOUD & AI NEWS OF THE MONTH - JAN 2025 (show)

2025 AI YEAR IN REVIEW
The Year of OpenAI
The Year of NVIDIA
The Year of Microsoft
The Year of Google
The Year of Oracle
The Year of China AI
The Year of Apple
The Year of Coding Agents (Anthropic, Cursor, Windsurf, CLIs, etc.)
The Year of Data Centers
AI Highlights and Lowlights (Corporate Layoffs, Acquihires, Funding, etc.)
2026 AI Draft

FEEDBACK?
Email: show at the cloudcast dot net
Twitter/X: @cloudcastpod
BlueSky: @cloudcastpod.bsky.social
Instagram: @cloudcastpod
TikTok: @cloudcastpod
Today we break down an extremely hyped week in AI: Amazon moves in on OpenAI with billions of dollars, GPT-5.2, Pro, and Codex are released, ChatGPT suddenly gets Photoshop and PDF editing, and Disney voluntarily hands its characters over to neural networks. Google makes Gemini 3 Flash the default for millions, Cursor starts acquiring companies, Grok crushes everyone in speech-to-speech, "drugs for AI" appear, Tesla robotaxis ride for $4.20, Waymo freezes at intersections, and the Pentagon officially begins preparing for AGI. The finale: the word of the year, "slop," and AI architects as "Person of the Year." Cozy, unsettling, and very telling.
For this twenty-fourth episode, we welcome back Alexandre Talon, who shares his journey across No-Code, Low-Code, and social engagement. Co-founder of https://www.labastide.io/ and of the association No-Code for Good, he supports positive-impact organizations with accessible, solidarity-driven tools. Alexandre describes his day-to-day work in the social and solidarity economy, his growing interest in AI-assisted development, and his recent experience with Lovable, Windsurf, Bolt, Cursor, and KiloCode. We also learn how he combines Supabase, N8n, and vibe coding to build fast, effective solutions. He looks back on his participation in the Grande Journée No-Code, his panel on the future of agencies in the AI era, and the pleasure of reconnecting with a community in constant evolution. A rich, clear, and inspiring episode that highlights a committed vision of No-Code. https://www.linkedin.com/in/alexandre-talon-nocode/
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
The Information's Aaron Holmes talks with TITV Host Akash Pasricha about Satya Nadella's deep-dive into Microsoft's product management to fix Copilot. We also talk with Graphite CEO Merrill Lutsky about selling his startup to Cursor, and Madrona Ventures' Matt McIlwain about the future of software investing in 2026. AI Reporter Rocket Drew speaks about the safety risks of humanoid robots, and EV reporter Steve LeVine about Ford's decision to ditch EV production for AI data centers.Articles discussed on this episode: https://www.theinformation.com/articles/microsofts-nadella-pressures-deputies-accelerate-copilot-improvementshttps://www.theinformation.com/articles/electric-fords-leap-powering-ai-data-centers-reflects-industry-adrifthttps://www.theinformation.com/briefings/waymo-suspends-san-francisco-service-city-outageTITV airs on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.Subscribe to: - The Information on YouTube: https://www.youtube.com/@theinformation- The Information: https://www.theinformation.com/subscribe_hSign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda
In AI, three stories that could change the game: Resolve AI, founded by former Splunk executives, reaches a $1 billion valuation after its Series A and is working on an autonomous site reliability engineer. Cursor, the AI coding assistant, keeps expanding with the acquisition of Graphite to improve AI code reviews. And the week was not kind to hardware, with iRobot, Luminar, and Rad Power Bikes filing for bankruptcy. What signals do these shifts hide, and what do they tell us about the future of AI and hardware? In social media, the changes also matter: Instagram cuts the hashtag limit per post to five, pushing creators to be more strategic with tags. YouTube announces 2025 updates that expand voice replies, set new Super Chat goals, and add AI creation tools. TikTok Shop, for its part, raises its sales commission to 9% starting January 2026. What opportunities and risks do these decisions bring for entrepreneurs and creators? If you want more radical marketing stories with practical lessons for your business, subscribe to the number-one Radical Marketing newsletter at borjagiron.com. Thanks for listening!Become a supporter of this podcast: https://www.spreaker.com/podcast/noticias-marketing--5762806/support.Marketing Radical newsletter: https://marketingradical.substack.com/welcomeAI Business newsletter: https://negociosconia.substack.com/welcomeMy books: https://borjagiron.com/librosSysteme free: https://borjagiron.com/systemeSysteme 30% off: https://borjagiron.com/systeme30Manychat free: https://borjagiron.com/manychatMetricool 30 days of the Premium plan free (use coupon BORJA30): https://borjagiron.com/metricoolSocial media news: https://redessocialeshoy.comAI news: https://inteligenciaartificialhoy.comClub: https://triunfers.com
In the final episode of this year, Tom and André start with the topic of gaming PCs and Linux. They then move on to n8n and further AI topics, such as the MCP Debugger plugin in IntelliJ and the Visual Editor in Cursor. Afterward, Tom talks about his 3D printer and the projects he has started with it.
Send us a textStefan Georgi made $30M by age 23 and has scaled multiple 8-figure businesses. In this conversation, he reveals why AI is creating the biggest wealth transfer opportunity of our generation and why most people are missing it.The marketing world is being rewritten in real time. AI is eliminating old skill sets while creating unprecedented opportunities for those who act fast. Stefan breaks down exactly how young entrepreneurs can leverage AI tools like vibe coding, Cursor, and Claude to build valuable solutions in days, not months.What You'll Learn:• Why AI makes it easier than ever to build profitable businesses from scratch• The exact AI skills that are in highest demand right now (and how to learn them fast)• How to use AI to solve real problems for established businesses willing to pay• Why aging "boomer businesses" are goldmines for AI-savvy entrepreneurs• The hiring story: How a 22-year-old with $40 got a $3K/month job using AI• Why teams are desperate for people who can think AND execute with AI• The fastest path to making your first $10K using AI toolsStefan doesn't hold back: he shares why 98% of your competition can't think critically, why the bar is lower than you think, and how curiosity + AI skills = unlimited opportunity.Connect with Stefan! https://www.stefanpaulgeorgi.com/Connect with Us!https://www.instagram.com/alchemists.library/https://twitter.com/RyanJAyala
This is a recap of the top 10 posts on Hacker News on December 18, 2025. This podcast was generated by wondercraft.ai (00:30): Beginning January 2026, all ACM publications will be made open accessOriginal post: https://news.ycombinator.com/item?id=46313991&utm_source=wondercraft_ai(01:53): We pwned X, Vercel, Cursor, and Discord through a supply-chain attackOriginal post: https://news.ycombinator.com/item?id=46317098&utm_source=wondercraft_ai(03:16): Your job is to deliver code you have proven to workOriginal post: https://news.ycombinator.com/item?id=46313297&utm_source=wondercraft_ai(04:39): Classical statues were not painted horriblyOriginal post: https://news.ycombinator.com/item?id=46311856&utm_source=wondercraft_ai(06:02): Are Apple gift cards safe to redeem?Original post: https://news.ycombinator.com/item?id=46313061&utm_source=wondercraft_ai(07:25): Please just try HTMXOriginal post: https://news.ycombinator.com/item?id=46312973&utm_source=wondercraft_ai(08:48): GPT-5.2-CodexOriginal post: https://news.ycombinator.com/item?id=46316367&utm_source=wondercraft_ai(10:11): Ask HN: Those making $500/month on side projects in 2025 – Show and tellOriginal post: https://news.ycombinator.com/item?id=46307973&utm_source=wondercraft_ai(11:34): Independent review of UK national security law warns of overreachOriginal post: https://news.ycombinator.com/item?id=46311355&utm_source=wondercraft_ai(12:58): History LLMs: Models trained exclusively on pre-1913 textsOriginal post: https://news.ycombinator.com/item?id=46319826&utm_source=wondercraft_aiThis is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Elena Verna is the head of growth at Lovable, the leading AI-powered app builder that hit $200 million in annual recurring revenue in under a year with just 100 employees. In this record fourth appearance on the podcast, Elena shares how the traditional growth playbook has been completely rewritten for AI companies. She explains why Lovable focuses on innovation over optimization, how they've shifted from activation to building new features, and why giving away their product for free has become their most powerful growth strategy.We discuss:1. Why 60% to 70% of traditional growth tactics no longer apply in AI2. Why you have to re-find product-market fit every 3 months3. The specific growth tactics driving Lovable's unprecedented growth4. Why giving away product is a growth strategy that beats paid ads5. “Minimum lovable product” as the new standard (not minimum viable product)6. Why activation now belongs to product teams, not growth teams7. Whether you should join an AI startup (honest tradeoffs)—Brought to you by:WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUsVercel—Your collaborative AI assistant to design, iterate, and scale full-stack applications for the webPersona—A global leader in digital identity verification—Transcript: https://www.lennysnewsletter.com/p/the-new-ai-growth-playbook-for-2026-elena-verna—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/181207556/my-biggest-takeaways-from-this-conversation—Where to find Elena Verna:• X: https://x.com/elenaverna• LinkedIn: https://www.linkedin.com/in/elenaverna• Newsletter: https://www.elenaverna.com—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Elena Verna(05:19) The scale and growth of Lovable(08:55) Confidence in Lovable as a business(12:17) Retention at Lovable(15:02) Lovable's unique 
growth levers(28:13) The role of marketing in Lovable's success(38:09) Launching new features(40:59) Hiring and team dynamics(43:17) The value of vibe coding(49:46) The importance of community(51:47) Giving away your product for free(56:26) Tripling their company size(01:00:23) Product-market-fit challenges(01:08:50) Advice for joining AI companies(01:12:00) Work-life balance(01:15:20) What it's like to work at Lovable(01:19:45) Women in tech(01:25:29) Final thoughts and lightning round—Referenced:• Elena Verna on how B2B growth is changing, product-led growth, product-led sales, why you should go freemium not trial, what features to make free, and much more: https://www.lennysnewsletter.com/p/elena-verna-on-why-every-company• The ultimate guide to product-led sales | Elena Verna: https://www.lennysnewsletter.com/p/the-ultimate-guide-to-product-led• 10 growth tactics that never work | Elena Verna (Amplitude, Miro, Dropbox, SurveyMonkey): https://www.lennysnewsletter.com/p/10-growth-tactics-that-never-work-elena-verna• Lovable: https://lovable.dev• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (co-founder and CEO): https://www.lennysnewsletter.com/p/building-lovable-anton-osika• Stripe: https://stripe.com• What differentiates the highest-performing product teams | John Cutler (Amplitude, The Beautiful Mess): https://www.lennysnewsletter.com/p/what-differentiates-the-highest-performing• How to win in the AI era: Ship a feature every week, embrace technical debt, ruthlessly cut scope, and create magic your competitors can't copy | Gaurav Misra (CEO and co-founder of Captions): https://www.lennysnewsletter.com/p/how-to-win-in-the-ai-era-gaurav-misra• “Dumbest idea I've heard” to $100M ARR: Inside the rise of Gamma | Grant Lee (CEO): https://www.lennysnewsletter.com/p/how-50-people-built-a-profitable-ai-unicorn• Eric Ries on LinkedIn: https://www.linkedin.com/in/eries• Elena's post on LinkedIn about Lovable Missions: 
https://www.linkedin.com/posts/elenaverna_everythingispossible-lovableway-activity-7401627519646474242-hn6e• SheBuilds: https://shebuilds.lovable.app• Shopify + Lovable: https://lovable.dev/shopify• The Product-Market Fit Treadmill: Why every AI company is sprinting just to stay in place: https://www.elenaverna.com/p/the-product-market-fit-treadmill• Cursor: https://cursor.com• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Unorthodox frameworks for growing your product, career, and impact | Bangaly Kaba (YouTube, Instagram, Facebook, Instacart): https://www.lennysnewsletter.com/p/frameworks-for-growing-your-career-bangaly-kaba• The adjacent user: https://brianbalfour.com/quick-takes/the-adjacent-user• Granola: https://www.granola.ai• Wispr Flow: https://wisprflow.ai• I'm worried about women in tech: https://www.elenaverna.com/p/im-worried-about-women-in-tech• Slack founder: Mental models for building products people love ft. Stewart Butterfield: https://www.lennysnewsletter.com/p/slack-founder-stewart-butterfield—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Did AI end up being a political force this year?
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
AGENDA: 03:32 Lightspeed's $9 Billion Fundraise 05:20 The Impact of Mega Funds on Seed VCs 10:09 The Supercycle of Growth and Late-Stage Investments 13:06 Disney Invests $1BN into OpenAI and What It Means 23:19 Oracle Hit Hard: Is Now the Time to Buy 28:34 Broadcom's Market Cap Drop and Anthropic's AI Chip Orders 35:04 Cursor Competes with Figma: The Convergence of Design and Coding Tools 46:20 The Biggest Danger for Incumbents: Being Maimed by AI 55:28 Boom Supersonic Raising $300M to… Power Data Centres… WTF 01:00:24 Will SpaceX IPO at $1.5TRN and The Elon Option Value
You open Excel daily, but are you using AI to make it work for you? While 58% of professionals have tried AI, only 17% use it regularly—a missed opportunity. Join CPA Kyle Ashcraft in this hands-on webinar to learn vibe coding—a no-programming approach using Cursor AI to automate repetitive Excel tasks. Watch Kyle transform messy spreadsheets, organize GL data, and reconcile transactions with simple AI prompts while keeping data secure. You'll get three ready-to-use scripts plus a framework to automate countless tasks and reclaim hours weekly.(Originally recorded on October 20, 2025, on Earmark Webinars+)Chapters(02:18) - Meet Kyle Ashcraft and His AI Journey (02:31) - The Importance of AI in Accounting (03:30) - Kyle's Background and CPA Review (04:38) - Live Webinar and Audience Interaction (05:12) - Kyle's AI Projects and Cursor Introduction (07:39) - Data Privacy Concerns with AI (17:18) - Practical AI in Excel: Examples and Demonstration (20:03) - Getting Started with Cursor (23:55) - First Cursor Project: Cleaning Up Excel Data (31:38) - Jumping into Financial Document Verification (32:50) - Exploring Cursor's Privacy Settings (33:48) - Understanding Data Retention Policies (36:23) - Comparing Excel Files with Cursor (36:48) - Analyzing Complex GL Details (42:22) - Using Cursor for Recurring Accounting Tasks (49:20) - Leveraging AI for Audit and Analysis (50:53) - Practical Tips for Implementing Cursor (53:56) - Q&A: Advanced Cursor Features (59:04) - Conclusion and Next Steps Earn CPE for this episode: https://www.earmark.app/c/2854Sign up to get free CPE for listening to this podcasthttps://earmarkcpe.comhttps://earmark.app/Download the Earmark CPE App Apple: https://apps.apple.com/us/app/earmark-cpe/id1562599728Android: https://play.google.com/store/apps/details?id=com.earmarkcpe.appResourcesIntro to Cursor PDF Guide - https://mcusercontent.com/02dbcae4a3e3f15021db25a0c/files/deff5647-e0a3-51d7-4225-cf8b3a48532d/Cursor_AI_Quick_Guide.pdfWebinar 
presentation - https://ai.maxwellstudy.com/Connect with Our Guest, Kyle Ashcraft, CPALinkedIn: https://www.linkedin.com/in/kyle-ashcraft-cpa-7638a42aLearn more about Maxwell CPA Reviewhttps://maxwellcpareview.com/Connect with Blake Oliver, CPALinkedIn: https://www.linkedin.com/in/blaketoliverTwitter: https://twitter.com/blaketoliver/
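The kind of repetitive spreadsheet cleanup Kyle automates in this webinar can be sketched in plain Python. This is a hypothetical illustration, not one of the three scripts from the session: it trims stray whitespace from headers and cells, skips blank rows, and normalizes accounting-style amounts such as "(500.00)" into signed numbers.

```python
import csv
import io

def parse_amount(raw: str) -> float:
    """Normalize accounting-style amounts: '1,234.50' -> 1234.5, '(500.00)' -> -500.0."""
    s = raw.strip().replace(",", "").replace("$", "")
    if s.startswith("(") and s.endswith(")"):
        return -float(s[1:-1])  # parentheses denote a negative amount
    return float(s)

def clean_gl_rows(text: str) -> list[dict]:
    """Read a messy GL export, trimming whitespace and skipping empty rows."""
    rows = []
    reader = csv.DictReader(io.StringIO(text), skipinitialspace=True)
    for row in reader:
        # skip rows whose cells are all empty
        if not any((v or "").strip() for v in row.values()):
            continue
        cleaned = {k.strip(): (v or "").strip() for k, v in row.items()}
        cleaned["Amount"] = parse_amount(cleaned["Amount"])
        rows.append(cleaned)
    return rows

messy = """Account , Amount
 4000 Revenue , "1,234.50"

 5000 Expenses , (500.00)
"""
print(clean_gl_rows(messy))
```

A prompt to Cursor along the lines of "clean this GL export and make amounts numeric" would typically generate something of this shape; the column name "Amount" and the sample data here are assumptions for illustration.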
Ryo Lu spent years watching his designs die in meetings. Then he discovered the tool that lets designers ship code at the speed of thought: Cursor, the company where Ryo is now Head of Design. In this episode, a16z General Partner Jennifer Li sits down with Ryo to discuss why "taste" is the wrong framework for understanding the future, why purposeful apps are "selfish," how System 7 holds secrets about AI interfaces, and the radical bet that one codebase can serve everyone if you design the concepts right instead of the buttons. Timecodes:00:01:45 - Design Becomes Approachable to Everyone00:02:36 - From Years to Minutes: Product Feedback Loops Collapse00:07:54 - "Each role used their own tool...their own lingo"00:13:15 - "If you don't have an opinion, you'll get AI slop"00:17:18 - The Lost Art of Being a Complete Builder00:21:42 - Design Is Not About Aesthetics00:28:57 - User-Centric vs System-Centric Philosophy00:34:00 - AI as Universal Interface, Not Chat Box00:38:42 - "Simplicity is the Biggest Constraint"00:43:42 - "I Don't Sit in Figma All Day Making Mocks"00:46:33 - RyoOS: Building A Personal Operating System00:48:45 - "We've been doing the same thing since 1984" Resources:Follow Ryo Lu on X: https://x.com/ryolu_Follow Jennifer Li on X: https://x.com/JenniferHliFollow Erik Torenberg on X: https://x.com/eriktorenberg Stay Updated:If you enjoyed this episode, be sure to like, subscribe, and share with your friends! 
Find a16z on X: https://x.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
A deadbolt clicks. This email, that voice--they sound all right. Then things go sideways. This week, 911 Cyber CEO Marc Raphael joins the pod to explore how AI makes scams faster, smoother, and harder to spot, and what you can do to stay hard to hit in the new threatscape. Learn more about your ad choices. Visit megaphone.fm/adchoices
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
David George is a General Partner at Andreessen Horowitz, where he leads the firm's Growth investing team. His team has backed many of the defining companies of this era, including Databricks, Figma, Stripe, SpaceX, Anduril, and OpenAI, and is now investing behind a new generation of AI startups like Cursor, Harvey, and Abridge. AGENDA: 03:05 – Why Everyone is Wrong: Mega Funds Does Not Reduce Returns 10:40 – Is Public Market Capital Actually Cheaper Than Private Capital? 18:55 – The Biggest Advantage of Staying Private for Longer 23:30 – The #1 Investing Rule for a16z: Always Invest in the Founder's Strength of Strengths 31:20 – Why Fear of Theoretical Competition Makes Investors Miss Great Companies 35:10 – Does Revenue Matter as Much in a World of AI? 44:10 – Does Kingmaking Still Exist in Venture Capital Today? 49:20 – Do Margins Matter Less Than Ever in an AI-First World? 53:50 – My Biggest Miss: Anthropic and What I Learn From it? 56:30 – Has OpenAI Won Consumer AI? Will Anthropic Win Enterprise? 59:45 – The Most Controversial Decision in Andreessen Horowitz History 1:01:30 – Why Did You Invest $300M into Adam Neumann and Flow?
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations. In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written. This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence. If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you. Registration for Automation Guild 2026 is open now: https://testguild.me/podag26
Corey Quinn reconnects with Keith Townsend, founder of The CTO Advisor, for a candid conversation about the massive gap between AI hype and enterprise reality. Keith shares why a biopharma company gave Microsoft Copilot a hard no, and why AI has genuinely 10x'd his personal productivity while Fortune 500 companies treat it like radioactive material. From building apps with Cursor to watching enterprises freeze in fear of being the next AI disaster in the news, Keith and Corey dig into why the tools transforming solo founders and small teams are dead on arrival in the enterprise, and what it'll actually take to bridge that gap.About Keith TownsendKeith Townsend is an enterprise technologist and founder of The Advisor Bench LLC, where he helps major IT vendors refine their go-to-market strategies through practitioner-driven insights from CIOs, CTOs, and enterprise architects. Known as “The CTO Advisor,” Keith blends deep expertise in IT infrastructure, AI, and cloud with a talent for translating complex technology into clear business strategy.With more than 20 years of experience, including roles as a systems engineer, enterprise architect, and PwC consultant, Keith has advised clients such as HPE, Google Cloud, Adobe, Intel, and AWS. His content series, 100 Days of AI and CloudEveryday.dev, provide practical, plainspoken guidance for IT leaders. 
A frequent speaker at VMware Explore, Interop, and Tech Field Day, Keith is a trusted voice on cloud and infrastructure transformation.Show Highlights(01:25) Life After the Futurum Group Acquisition(03:56) Building Apps You're Not Qualified to Build with Cursor(05:45) Creating an AI-Powered RSS Reader(09:01) Why AI is Great at Language But Not Intelligence(11:39) Are You Looking for Advice or Just Validation?(13:49) Why Startups Can Risk AI Disasters and AWS Can't(17:28) You Can't Outsource Responsibility(19:52) Business Users Are Scared of AI Too(23:00) LinkedIn's AI Writing Tool Misses the Point(26:42) Private AI is Starting to Look Appealing(29:00) Never Going Back to Pre-AI Development(34:27) AI for Jobs You'd Never Hire Someone to Do(39:09) Where to Find Keith and Closing ThoughtsLinksThe CTO Advisor: https://thectoadvisor.comSponsor: https://www.sumologic.com/solutions/dojo-aihttps://wiz.io/crying-out-cloud
Eytan Seidman, VP of product at Shopify, joins the podcast to unpack Shopify's Winter '26 Edition and how AI is emerging into the market for developers and merchants. They discuss the new Dev MCP server, showing how tools like Cursor and Claude Desktop can rapidly scaffold Shopify apps, wire up Shopify functions, and ship payment customization and checkout UI extension experiences that lean on Shopify primitives like meta fields and meta objects across online stores and point of sale. Eytan also breaks down how Sidekick connects with apps, why the new analytics API and ShopifyQL open fresh analytics use cases, and more. Links Shopify Winter '26 Edition: https://www.shopify.com/editions/winter2026 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! https://t.co/oKVAEXipxu Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com (mailto:elizabeth.becz@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Check out our newsletter (https://blog.logrocket.com/the-replay-newsletter/)! https://blog.logrocket.com/the-replay-newsletter/ Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. 
(https://logrocket.com/signup/?pdr) Chapters 01:00 — AI as the Focus of Winter '26 02:00 — MCP Server as the Ideal Dev Workflow 03:00 — Best Clients for MCP (Cursor, Claude Desktop) 04:00 — Hallucinations & Code Validation in MCP 06:00 — Developer Judgment & Platform Primitives 07:00 — Storage Choices: Meta Fields vs External Storage 09:00 — Learning UI Patterns Through MCP 10:00 — Sidekick Overview & Merchant Automation 11:00 — Apps Inside Sidekick: Data & UI Integration 13:00 — Scopes, Data Access & Developer Responsibility 14:00 — AI-Ready Platform & Explosion of New Apps 16:00 — New Developer Demographics Entering Shopify 17:00 — Where Indie Devs Should Focus (POS, Analytics) 18:00 — New Analytics API & Opportunities 19:00 — Full Platform Coverage via MCP Tools 20:00 — Building Complete Apps in Minutes 21:00 — Large Stores, Token Limits & MCP Scaling 22:00 — Reducing Errors with UI & Function Testing 23:00 — Lessons from Building the MCP Server 25:00 — Lowering Barriers for Non-Experts 26:00 — High-Quality Rust Functions via MCP 27:00 — MCP Spec Adoption: Tools Over Resources 28:00 — Future: Speed, Quality & UI Precision 29:00 — Model Evolution, Evals & Reliability 31:00 — Core Shopify Primitives to Build On 33:00 — Docs, Community & Learning Resources
While teenagers may start out using AI chatbots for basic questions, their relationship with chatbot platforms has the potential to turn addictive. Plus, Anysphere CEO Michael Truell explained the features his company is focused on building out after reaching $1 billion in annualized revenue. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail. We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.Guest Socials - Yash's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:20) Who is Yash Kosaraju? (CISO at Sendbird)(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform(05:00) Balancing Speed and Security in an AI Transition(06:50) Embedding Security Engineers into AI Sprint Teams(08:20) Threats in the AI Agent World (Data & Vendor Risks)(10:50) Blind Spots: "It's Microsoft, so it must be secure"(12:00) Securing AI Agents vs. AI-Embedded Applications(13:15) The Risk of Agents Making Changes in Customer Environments(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs.
Reality) (17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA(18:25) What is "Trust OS"? A Foundation for Responsible AI(20:45) Balancing Agent Security vs. Endpoint Security(24:15) AI Incident Response: When an AI Gives a Wrong Answer(29:20) Security for Platform Engineers: Enabling vs. Blocking(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees(32:45) Building a "Security as Enabler" Culture(36:15) What Questions to Ask AI Vendors (Paying with Data?)(39:20) Personal Use of Corporate AI Accounts(43:30) Using AI to Learn AI (Gemini Conversations)(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"(48:20) The AI CTF: Gamifying Security Training(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
In the "Amazon Age," customers expect custom products yesterday. Eric Turney explains why businesses are now "forced to automate" or get left behind. In this interview, he reveals how Chinese manufacturers are using robotics to reduce factory lines from 50 workers to just 5 and why speed is now the most critical metric for survival. The conversation delves into the practical side of automation: using Monday.com to manage thousands of orders, implementing AI image editors to stay competitive, and the controversial rise of "Vibe Coding." Eric shares how he bypassed expensive developers to fix a critical website bug in 30 minutes using Claude AI, while Sean details how he built a functional SaaS tool using Cursor without writing the code himself. Check out the company: https://montereycompany.com
In this episode, discover the behind-the-scenes of lead generation with Tony Berthomé, founder of the agency Leverage, formerly of Google and Microsoft, and one of the sharpest experts of his generation on performance, acquisition, and the concrete use of AI in marketing operations.Tony shares how he unlocks growth for the companies he works with: real field experience, clear teaching, and a wealth of actionable value.
In this episode of Connected Mate, PPC and Alexandre explore their favorite AI tools of the moment, sorted into three categories: general-purpose assistants, coding tools, and visual generators.They start with ChatGPT, Manus, Perplexity, and Notebook LM, comparing their strengths and weaknesses. Alexandre reveals how he uses these AIs daily to automate his research, rephrase his texts, or summarize complex documents.Then it's on to the world of code with Cursor and Lovable. One is a true copilot for developers; the other a flashier tool with more limited capabilities. Alexandre explains how he builds iPhone apps without being a developer, thanks to a well-honed system of AI agents.Finally, PPC and Alexandre tackle visual AIs, from Seelab to Sora 2 by way of Higgsfield, painting a powerful but nuanced picture of what these technologies make possible today... and what they don't yet.A rich, concrete, straight-talking episode.To follow news about this podcast, subscribe for free to the newsletter, written with love and guaranteed spam-free: https://bonjourppc.substack.com And to discover PPC's book Réinventez votre entreprise à l'ère de l'IA, with a foreword by Serge Papin, go here: https://amzn.to/4gTLwxSHosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Tomer Cohen is the longtime chief product officer at LinkedIn, where he's pioneering the Full Stack Builder program, a radical new approach to product development that fully embraces what AI makes possible. Under his leadership, LinkedIn has scrapped its traditional Associate Product Manager program and replaced it with an Associate Product Builder program that teaches coding, design, and PM skills together. He's also introduced a formal “Full Stack Builder” title and career ladder, enabling anyone from any function to take products from idea to launch. In this conversation, Tomer explains why product development has become too complex at most companies and how LinkedIn is building an AI-powered product team that can move faster, adapt more quickly, and do more with less.We discuss:1. How 70% of the skills needed for jobs will change by 20302. The broken traditional model: organizational bloat slows features to a six-month cycle3. The Full Stack Builder model4. Three pillars of making FSB work: platform, agents, and culture (culture matters most)5. Building specialized agents that critique ideas and find vulnerabilities6. Why off-the-shelf AI tools never work on enterprise code without customization7. Top performers adopt AI tools fastest, contrary to expectations about leveling effects8. Change management tactics: celebrating wins, making tools exclusive, updating performance reviews—Brought to you by:Vanta—Automate compliance. 
Simplify security: https://vanta.com/lennyFigma Make—A prompt-to-code tool for making ideas real: https://www.figma.com/lenny/Miro—The AI Innovation Workspace where teams discover, plan, and ship breakthrough products: https://miro.com/lenny—Transcript: https://www.lennysnewsletter.com/p/why-linkedin-is-replacing-pms—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/180042347/my-takeaways-from-this-conversation—Where to find Tomer Cohen:• LinkedIn: https://www.linkedin.com/in/tomercohen• Podcast: https://podcasts.apple.com/us/podcast/building-one-with-tomer-cohen/id1726672498—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Tomer Cohen(04:42) The need for change in product development(11:52) The full-stack builder model explained(16:03) Implementing AI and automation in product development(19:17) Building and customizing AI tools(27:51) The timeline to launch(31:46) Pilot program and early results(37:04) Feedback from top talent(39:48) Change management and adoption(46:53) Encouraging people to play with AI tools(41:21) Performance reviews and full-stack builders(48:00) Challenges and specialization(50:05) Finding talent(52:46) Tips for implementing in your own company(56:43) Lightning round and final thoughts—Referenced:• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen• LinkedIn: https://www.linkedin.com• Cursor: https://cursor.com• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Devin: https://devin.ai• Figma: https://www.figma.com• Microsoft Copilot: https://copilot.microsoft.com• Windsurf: https://windsurf.com• Building a 
magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | Varun Mohan (co-founder and CEO): https://www.lennysnewsletter.com/p/the-untold-story-of-windsurf-varun-mohan• Lovable: https://lovable.dev• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (co-founder and CEO): https://www.lennysnewsletter.com/p/building-lovable-anton-osika• APB program at LinkedIn: https://careers.linkedin.com/pathways-programs/entry-level/apb• Naval Ravikant on X: https://x.com/naval• One Song podcast: https://podcasts.apple.com/us/podcast/%D7%A9%D7%99%D7%A8-%D7%90%D7%97%D7%93-one-song/id1201883177• Song Exploder podcast: https://songexploder.net• Grok on Tesla: https://www.tesla.com/support/grok• Reid Hoffman on X: https://x.com/reidhoffman—Recommended books:• Why Nations Fail: The Origins of Power, Prosperity, and Poverty: https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity/dp/0307719227• Outlive: The Science and Art of Longevity: https://www.amazon.com/Outlive-Longevity-Peter-Attia-MD/dp/0593236599• The Beginning of Infinity: Explanations That Transform the World: https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Bryan Power, who recently left Nextdoor after seven years, discusses his viral article on quitting properly, why exits define careers, how managers mishandle departures, and the implications of Cursor's acquisition of Growth by Design.Support our Sponsor:Metaview is the AI platform built for recruiting. Check it out: https://www.metaview.ai/heretics* Our suite of AI agents works across your hiring process to save time, boost decision quality, and elevate the candidate experience.* Learn why team builders at 3,000+ cutting-edge companies like Brex, Deel, and Quora can't live without Metaview.* It only takes minutes to get up and running.KEEP UP WITH BRYAN, NOLAN + KELLI ON LINKEDINBryan: https://www.linkedin.com/in/bryanpower/Nolan: https://www.linkedin.com/in/nolan-church/Kelli: https://www.linkedin.com/in/kellidragovich/__LINKS:For coaching and advising inquire at https://kellidragovich.com/—TIMESTAMPS:(00:00) Introduction & Power Hour Returns(00:55) Bryan's Viral “How to Quit” Article(04:00) Why Your Exit Becomes Your Entire Story(06:00) Why Companies Don't Teach Employees How to Leave(09:00) The Loyalty Expectation Problem(12:27) Sponsor: Metaview(14:38) How Managers Screw Up Exits(21:57) No Long Goodbyes: The Best Timing Advice(24:13) What to Do When Someone Resigns(27:30) Maintaining Relationships After You Leave(29:26) Growth by Design Acquired by Cursor(36:33) The State of the Recruiting Market(40:00) AI Native Skills & The Future of Entry Level Hiring(46:49) Cringey Corporate Lingo Game(50:55) Wrap This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
Why do AI agents lose context in long conversations? Why do they search files by keywords rather than by functionality? Apple has published CLaRa, research that attacks the fundamental problems of RAG. Today we explain what a RAG is, why it fails, and how this new approach could change the way we code with AI. ------------ Listen to this podcast or watch the video on Be Native: https://benative.dev Become the Master that companies need with the Swift Mastery Program 26: https://acoding.academy/smp26 ------------ If you use Claude Code, Cursor, Copilot, or any AI coding agent, you've surely experienced this: an hour into a session, the model understands your project perfectly, and suddenly... it seems to lose its mind. It suggests things you had already ruled out. It ignores files you gave it twenty minutes ago. Why does this happen? The answer lies in how RAG (Retrieval-Augmented Generation) systems work and in their fundamental limitations. In this episode we explain: → What a RAG is and why it is essential for coding agents → Why the file search doesn't understand your code, only words → How compression in long conversations destroys the context you have built → What CLaRa is, Apple's new research that unifies retrieval and generation → Why compressing intelligently can give better results than using the full text Apple keeps betting on efficiency over brute force. They aren't after the biggest model, but the smartest one. And CLaRa is a perfect example of that philosophy. 
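The "searches words, not functionality" failure mode can be sketched in a few lines of Python. This is a toy illustration, not code from CLaRa or the episode; the file names and snippets are hypothetical:

```python
# Toy illustration: keyword-based retrieval matches tokens, not intent.
# All file names and code snippets here are hypothetical.

files = {
    "auth.py": "def check_credentials(user, pwd): ...",
    "billing.py": "def charge_card(amount): ...",
}

def keyword_search(query, corpus):
    """Return the files that share at least one token with the query."""
    q = set(query.lower().split())
    hits = []
    for name, text in corpus.items():
        tokens = set(text.lower().replace("(", " ").replace(")", " ").split())
        if q & tokens:
            hits.append(name)
    return hits

# A query phrased by *functionality* finds nothing: no token overlaps.
print(keyword_search("login validation", files))    # -> []
# The same intent phrased in the file's own vocabulary does match.
print(keyword_search("check_credentials", files))   # -> ['auth.py']
```

Semantic retrieval (and approaches like CLaRa) aim to close exactly this gap: "login validation" should surface `check_credentials` even though they share no tokens.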
Research available in the open: Paper: arxiv.org/abs/2511.18659 GitHub: github.com/apple/ml-clara Models: huggingface.co/apple/CLaRa-7B-Instruct Development has changed forever with the arrival of AI agents, and to get the most out of them and become the kind of ultra-productive developer companies are looking for, you have to be a Master: achieve mastery with the Swift Mastery Program 2026. Download it now from the App Store: Be Native, and listen to us there. Subscribe to our YouTube channel: Apple Coding on YouTube. Discover our Twitch channel: Apple Coding on Twitch. Check out our listener offers: - Courses on Udemy (with discount code) - Apple Coding Academy - Subscribe to Apple Coding on our Patreon. - Swift Telegram channel. Access the channel. --------------- Get the official Apple Coding t-shirts with the Swift and Apple Coding logos, as well as all kinds of merchandise such as mugs and cases. - Apple Coding merchandise store.
This Week In Startups is made possible by:LinkedIn Ads - http://linkedin.com/thisweekinstartupsVanta - https://www.vanta.com/twistPilot - https://pilot.com/twistToday's show: Did you know there's actually a shortage of US bricklayers? It's TRUE! So feel free to marvel at Monumental's brick-laying robots. They're not putting anyone out of work, but filling a much-needed gap.Join Alex and Monumental founder/CEO Salar al Khafaji for a deep dive on how the startup is making construction robots play nice together by maintaining separate “zones” of operation, why Salar thinks startups need to focus on genuinely complex, real-world problems to blossom, and the secrets of fundraising in Europe.PLUS Alex chats with Seasats CEO Mike Flanigan about designing the next generation of autonomous marine craft. (That is to say, ocean drones.) From their home base in San Diego, the company is trying to get completely independent of all Chinese parts. Find out how it's going, how they're overcoming the “wildly negative” ROI on maritime tech, and why we have so few defenses against tiny, agile drones.All that AND Jason takes some of YOUR Founder Questions.Timestamps:(03:23) How Monumental determined what kinds of robots construction sites need the most(06:49) How maintaining “zones” ensures that the robots all play nice with one another(07:52) There's a shortage of bricklayers, so Monumental's NOT taking anyone's job(9:16) LinkedIn Ads: Start converting your B2B audience into high quality leads today. 
Go to http://linkedin.com/thisweekinstartups to claim your credit.(13:21) Why startups need to tackle large-scale, complex, real-world problems to really grow(15:44) Why Monumental is building in The Netherlands, and running pilots in the UK(19:07) Vanta - Get $1000 off your SOC 2 at https://www.vanta.com/twist(20:44) Why construction is unique among applications for automation and robots(26:01) Salar argues that fundraising in Europe is not as hard as you may have heard(27:55) We don't just need housing, we need BEAUTIFUL housing(31:11) Pilot - Visit https://www.pilot.com/twist and get $1,200 off your first year. (33:25) How the Scout autonomous boat challenge inspired Seasats(35:28) Trying to make drones into an “iPhone Style” project(37:39) Why Seasats is focused on endurance and staying power more than launches(39:15) The complexities of working with fuel cells(42:27) The importance of beautiful design even when working on government technology(45:51) Why they're building Seasats in beautiful San Diego, CA(47:29) The challenge of getting entirely free from Chinese components(53:52) “The Power of Small Things Has Changed”(55:18) The “wildly negative” ROI on most humanoid robotics companies also applies to maritime tech(59:09) Why there are so few defense nets against people with tiny but agile drones(01:02:32) FOUNDER Q's: Is a founder working 24/7 a red flag?(01:10:11) How bad is it to use VC money to pay off credit cards?(01:12:49) A look at Cursor's unique recruitment strategy.(01:19:57) Should young VCs go to startup conferences?Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/Check out the TWIST500: https://twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcp*Follow Lon:X: https://x.com/lons*Follow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelm/*Thank you to our partners: LinkedIn Ads (9:16), Vanta (19:07), and Pilot (31:11).
My guest today is David George. David is a General Partner at Andreessen Horowitz, where he leads the firm's growth investing business. His team has backed many of the defining companies of this era – including Databricks, Figma, Stripe, SpaceX, Anduril, and OpenAI – and is now investing behind a new generation of AI startups like Cursor, Harvey, and Abridge. This conversation is a detailed look at how David built and runs the a16z growth practice. He shares how he recruits his team and builds a “Yankees-level” culture, how his team makes investment decisions without traditional committees, and how they work with founders years before investing to win the most competitive deals. Much of our conversation centers on AI and how his team is investing across the stack, from foundational models to applications. David draws parallels to past platform shifts – from SaaS to mobile – and explains why he believes this period will produce some of the largest companies ever built. David also outlines the models that guide his approach – why markets often misprice consistent growth, what makes “pull” businesses so powerful, and why most great tech markets end up winner-take-all. David reflects on what he's learned from studying exceptional founders and why he's drawn to a particular type, the “technical terminator.” Please enjoy my conversation with David George. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus. ----- This episode is brought to you by Ridgeline. Ridgeline has built a complete, real-time, modern operating system for investment managers. 
It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Head to ridgelineapps.com to learn more about the platform. ----- This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Show Notes: (00:00:00) Welcome to Invest Like The Best (00:04:00) Meet David George (00:03:04) Understanding the Impact of AI on Consumers and Enterprises (00:05:56) Monetizing AI: What is AI's Business Model (00:11:04) Investing in Robotics and American Dynamism (00:13:31) Lessons from Investing in Waymo (00:15:55) Investment Philosophy and Strategy (00:17:15) Investing in Technical Terminators (00:20:18) Market Leaders Capture All of the Value Creation (00:24:56) The Maturation of VC and Competitive Landscape (00:28:18) What a16z Does to Win Deals (00:33:06) David's Daily Routine: Meetings Structure and Blocking Time to Think (00:36:34) Why David Invests: Curiosity and Competition (00:40:12) The Unique Culture at Andreessen Horowitz (00:42:46) The Perfect Conditions for Growth Investing (00:47:04) Push v. Pull Businesses (00:49:19) The Three Metrics a16z Uses to Evaluate AI Companies (00:52:15) Unique Products and Unique Distribution (00:54:55) Tradeoffs of the a16z Firm Structure (00:59:04) a16z's Semi-Algorithmic Approach to Selling (01:00:54) Three Ways Startups can Beat Incumbents in AI (01:03:44) The Kindest Thing
HTML All The Things - Web Development, Web Design, Small Business
The web development world never stops moving - frameworks push new versions, browsers release new features, dependabot keeps chiming in, and AI tools like Cursor and the latest LLMs drop at a dizzying pace. In this episode, Mike breaks down why everything updates so fast, how he personally decides what's worth upgrading, and how he stays sane with the nonstop stream of patches, releases, and AI model announcements. From security fixes to real productivity gains, Mike shares practical strategies for keeping your workflow stable without falling behind. Show Notes: https://www.htmlallthethings.com/podcast/never-ending-updates-ai-models-cursor-frameworks Powered by CodeRabbit - AI Code Reviews: https://coderabbit.link/htmlallthethings Use our Scrimba affiliate link (https://scrimba.com/?via=htmlallthethings) for a 20% discount!! Full details in show notes.
On this episode I sit down with indie app builder and designer Chris Raroque to walk through his real AI coding workflow. Chris explains how he ships a portfolio of productivity apps doing thousands in MRR by pairing Claude Code and Cursor instead of picking just one tool. He live-demos “vibe coding” an iOS animation, then compares how Claude Code and Cursor's plan mode tackle the same task. The episode closes with concrete tips on plan mode, MCP servers, AI code review, dictation, and deep research so solo devs can build bigger apps than they could alone. Timestamps 00:00 – Intro 03:04 – Which Tools & Models to Use 09:16 – Thoughts on the Vibe Coding Mobile App Landscape 11:14 – Live demo: prompting Claude Code to build an iOS “AI searching” animation 18:07 – Live demo: prompting Cursor with same task 21:02 – Chris's Best Tips for Vibe Coders Key Points You don't have to pick one IDE copilot: Chris actively switches between Claude Code and Cursor because they have different strengths. For very complex bug-hunting, he prefers Cursor with plan mode; for big-picture app architecture, he leans on Claude Code with Opus. Non-developers should start on higher-level “vibe coding” platforms like Create Anything for mobile apps before graduating to Claude/Cursor. Plan mode plus detailed, spoken prompts dramatically improves code quality, especially for UI and animation work. MCP servers and AI code review bots let solo developers safely set up infra, enforce security, and catch bugs they'd otherwise miss. Claude's deep research is a powerful way to choose the right patterns and libraries before handing implementation back to Claude Code or Cursor. The #1 tool to find startup ideas/trends - https://www.ideabrowser.com LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. 
We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/ The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: thevibemarketer.com Startup Empire - get your free builders toolkit to build cashflowing business - https://startup-ideas-pod.link/startup-empire-toolkit Become a member - https://startup-ideas-pod.link/startup-empire FIND ME ON SOCIAL X/Twitter: https://twitter.com/gregisenberg Instagram: https://instagram.com/gregisenberg/ LinkedIn: https://www.linkedin.com/in/gisenberg/ FIND CHRIS ON SOCIAL Youtube: https://www.youtube.com/@raroque X/Twitter: https://x.com/raroque Instagram: https://www.instagram.com/chris.raroque/
What happens when AI adoption surges inside companies faster than anyone can track, and the data that fuels those systems quietly slips out of sight? That question sat at the front of my mind as I spoke with Cyberhaven CEO Nishant Doshi, fresh from publishing one of the most detailed looks at real-world AI usage I have seen. This wasn't a report built on opinions or surveys. It was built on billions of actual data flows across live enterprise environments, which made our conversation feel urgent from the very first moment. Nishant explained how AI has moved out of the experimental phase and into everyday workflows at a speed few anticipated. Employees across every department are turning to AI tools not as a novelty but as a core part of how they work. That shift has delivered huge productivity gains, yet it has also created a new breed of hidden risk. Sensitive material isn't just being uploaded through deliberate actions. It is being blended, remixed, and moved in ways that older security models cannot understand. Hearing him describe how this happens in fragments rather than files made me rethink how data exposure works in 2025. We also dug into one of the most surprising findings in Cyberhaven's research. The biggest AI power users inside companies are not executives or early career talent. It is mid-level employees. They know where the friction is, and they are under pressure to deliver quickly, so they experiment freely. That experimentation is driving progress, but it is also widening the gap between how AI is used and how data is meant to be protected. Nishant shared how that trend is now pushing sensitive code, R&D material, health information, and customer data into tools that often lack proper controls. Another moment that stood out was his explanation of how developers are reshaping their work with AI coding assistants. The growth in platforms like Cursor is extraordinary, yet the risks are just as large. 
Code that forms the heart of an organisation's competitive strength is frequently pasted into external systems without full awareness of where it might end up. It creates a situation where innovation and exposure rise together, and older security frameworks simply cannot keep pace. Throughout the conversation, Nishant returned to the importance of visibility. Companies cannot set fair rules or safe boundaries if they cannot see what is happening at the point where data leaves the user's screen. Traditional controls were built for a world of predictable patterns. AI has broken those patterns apart. In his view, modern safeguards need to sit closer to employees, understand how fragments are created, and guide people toward safer workflows without slowing them down. By the time we reached the end of the interview, it was clear that AI governance is no longer a strategic nice-to-have. It is becoming a daily operational requirement. Nishant believes employers must create a clear path forward that balances freedom with control, and give teams the tools to do their best work without unknowingly putting their organisations at risk. His message wasn't alarmist. It was practical, grounded, and shaped by years working at the intersection of data and security. So here is the question I would love you to reflect on. If AI is quickly becoming the engine of productivity across every department, what would your organisation need to change today to keep its data safe tomorrow? And how much visibility do you honestly have over where your most sensitive information is going right now? I would love to hear your thoughts. Useful Links Connect with Cyberhaven CEO Nishant Doshi on LinkedIn Learn more about Cyberhaven Tech Talks Daily is Sponsored by NordLayer: Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.
AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes. Vibecoding: An Automation Design Instrument "I would define Vibecoding as an automation design instrument. It's not a tool that can deliver end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do." Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation. Pair Programming with the Machine "If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step." One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance. 
This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails. The Transactional Model: Atomic Commits and Rollback "When you're working with AI, the more atomic commits it delivers, more easy for you to kind of guide and navigate it through the process of development." Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then rollback and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration. Context Management: The Weak Point and the Solution "Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent." Context management challenges current AI coding tools—they forget, lose thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself. 
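That transactional loop, with the prompt and rollback instructions embedded in the commit message, can be sketched as a shell session. This is a minimal illustration under assumed conventions, not Sergey's literal commands; the repo, file, and messages are hypothetical:

```shell
# Sketch of the transactional workflow: one atomic commit per AI change,
# with the prompt recorded in the message so the repo carries the context.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

# The AI delivers one small, self-contained change...
echo "def hello(): return 'hi'" > app.py
git add app.py
# ...committed atomically, with the generating prompt and the rollback
# instructions stored in the commit message itself:
git commit -q -m "feat: add hello()" \
    -m "prompt: 'write a hello function'" \
    -m "rollback: git revert this commit"

# Validation failed? Roll the whole transaction back instead of iterating:
git revert --no-edit HEAD
git log --oneline   # both the change and its revert survive as context
```

Because each change is atomic, the cost of a rollback is one `git revert`, and the history still records what was tried and why.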
Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows.

TDD 2.0: Humans Write Tests, AI Writes Code

"I would never allow AI to write the test. I would do it by myself. Still, it can write the code."

Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on the implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (the subject of a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation.

The Two Cardinal Rules: Architecture and Verification

"I would never allow AI to invent architecture. Writing AI agentic coding, vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake."

Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding. Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework becomes the guardrail that prevents AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at the code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation.

Prototype vs. Production: Two Different Workflows

"When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, a rapid prototype."

Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI.

The Future: AI Literacy as Mandatory

"Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer."

Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency. Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration.

Resources for Practitioners

"We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful."

Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real time.

About Sergey Sergyenko

Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams. Their tech stack includes Ruby, Rails, Elixir, and ReactJS. Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk in your membership area. If you are not a member, don't worry: you can get the 1-month trial and watch the whole conference. You can cancel at any time. You can link with Sergey Sergyenko on LinkedIn.
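Sergey's TDD 2.0 split, where the human authors the tests and the AI iterates on the implementation until they pass, can be sketched in a few lines. This is a minimal illustration, not code from the episode; the `apply_discount` function and its business rules are hypothetical.

```python
# TDD 2.0 sketch: the human-written test is the executable requirement
# (and effectively the prompt); the AI-written implementation is
# regenerated until it passes. All names here are hypothetical examples.

# --- Human-authored: defines what "passing" means; the AI may not edit it ---
def test_apply_discount():
    assert apply_discount(100.0, 0.2) == 80.0    # 20% off 100
    assert apply_discount(100.0, 0.0) == 100.0   # no discount is a no-op
    try:
        apply_discount(100.0, 1.5)               # >100% discount is invalid
        assert False, "expected ValueError"
    except ValueError:
        pass

# --- AI-authored: iterated on until the test above passes ---
def apply_discount(price: float, rate: float) -> float:
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return price * (1.0 - rate)

test_apply_discount()
print("requirements pass")
```

The point is the direction of control: the test encodes domain knowledge the human owns, and the AI's only job is to satisfy it.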
The topic: AI and vibe coding have revolutionized the creation of side businesses: you no longer need to be a tech expert to launch a website, an app, or a media property. What matters is elsewhere: knowing how to spot the right business opportunities and use the best tools.

Today's guest: Esther Moisy-Kirschbaum is head of business development at Magma, a newsletter that identifies trends and business opportunities.

Alongside Matthieu Stefani, Esther and Christofer Ciminelli explain how to build a profitable side business using vibe coding and the most accessible AI tools.

You'll discover:
Why entrepreneurship is a pillar of investing
How to identify side-business opportunities
What vibe coding is and how to get started
The opportunities in faceless content and live shopping
How to combine vibe coding, AI, and APIs

Perks: Good news! We have negotiated an exceptional deal for you. With the code BFLAMARTINGALE, get 50% off an annual subscription to the Magma newsletter. Offer valid until 31/12/2025 (beyond that date, the code still gets you €50 off
AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding

In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality.

Vibecoding vs. AI-Assisted Coding: Reading Code Matters

"I read all the code that it outputs, so I need smaller steps of changes."

Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later, when changes become difficult.

The Moment of Shift: Staying in the Zone

"It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time."

Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state. The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking—it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design.

Thinking in Commits: The Right Size for AI Work

"I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt."

Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly—if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI—small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

The Tech Debt Question: Same Code, Same Debt

"Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... I'm faster and can make more code, but I invest some of that savings back into cleaning things up."

As the author of "Swimming in Tech Debt," Lou brings a unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything. When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices, regardless of how code is generated.

When Vibecoding Creates Debt: AI Resistance as a Symptom

"When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes."

Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices, including code review, even when AI makes generation fast.

Can AI Fix Tech Debt? Yes, With Guidance

"You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want."

Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists.

Reinvesting Productivity Gains in Quality

"I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity."

Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some of that acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial. If AI makes you 30% faster at implementation, using 10% of that gain on refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder.

The 100x Code Concern: Accountability Remains Human

"Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it."

When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions.

Practical Workflow: Inline Prompting and Small Changes

"Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."

Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, and full-codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control.

Resources and Community

Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources.

About Lou Franco

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups, as well as at Trello and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum. You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.
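Lou's "commit-sized prompt" rule can be made concrete as a small review gate: if the diff a prompt produced is too large to read line by line, the prompt was too ambitious and should be split. The sketch below is illustrative only; the thresholds and the `is_review_sized` function are made up, not tooling Lou describes.

```python
# Sketch of the commit-sized heuristic: keep each AI prompt's output small
# enough to review fully before committing. Thresholds are arbitrary examples.

def is_review_sized(diffstat, max_files=3, max_lines=80):
    """diffstat: list of (path, lines_added, lines_removed) tuples,
    as you might parse from `git diff --numstat`."""
    total_lines = sum(added + removed for _, added, removed in diffstat)
    return len(diffstat) <= max_files and total_lines <= max_lines

# A focused, single-purpose change: small enough to read and commit.
small = [("billing.py", 25, 4)]
# An over-ambitious prompt that touched half the codebase: split it up.
large = [("billing.py", 210, 80), ("models.py", 95, 30),
         ("api.py", 60, 12), ("utils.py", 40, 5)]

print(is_review_sized(small))  # True
print(is_review_sized(large))  # False
```

A gate like this also keeps the commit history telling a story, since each accepted prompt maps to one reviewable, revertible commit.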
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Maor Shlomo is the Founder and CEO of Base44, the AI building platform that Maor built from idea to an $80M acquisition by Wix in just 8 months. Today the company serves millions of users and will hit $50M ARR by the end of the year. Before Base44, Maor was the Co-Founder and CTO of Explorium. AGENDA: 00:05 – 00:10: How Vibe Coding is Going to Kill Salesforce and SaaS 00:13 – 00:15: Do Vibe Coding platforms have any defensibility? 00:22 – 00:24: I am not worried about Replit and Lovable, I am worried about Google… 00:28 – 00:29: Margins do not matter, the price of the models will go to zero 00:31 – 00:32: Speed to copy has never been lower; has the technical moat been eroded? 00:47 – 00:48: How does Base44 beat Cursor? 00:56 – 00:57: Do not pay attention to competition: focus on your business 00:57 – 00:58: How Base44 is helped, not hurt, by not being in Silicon Valley 00:58 – 00:59: What percent of code will be written by AI in 12 months? 01:01 – 01:02: OpenAI or Anthropic: Why Maor is Long Anthropic 01:03 – 01:04: If I could have any board member in the world it would be Jack Dorsey
AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI

In this special episode, Elina Patjas shares her remarkable journey from designer to solo developer, building LexieLearn—an AI-powered study tool with 1,500+ users and paying customers—entirely through AI-assisted coding. She reveals the practical workflow, anti-patterns to avoid, and why the future of software might not need permanent apps at all.

The Two-Week Transformation: From Idea to App Store

"I did that, and I launched it to App Store, and I was like, okay, so… If I can do THIS! So, what else can I do? And this all happened within 2 weeks."

Elina's transformation happened fast. As a designer frustrated with traditional software development, where maybe 10% of your original vision gets executed, she discovered Cursor and everything changed. Within two weeks, she went from her first AI-assisted experiment to launching a complete app in the App Store. The moment that shifted everything was realizing that AI had fundamentally changed the paradigm from "writing code" to "building the product." This wasn't about learning to code—it was about finally being able to execute her vision 100% the way she wanted it, with immediate feedback through testing.

Building LexieLearn: Solving Real Problems for Real Users

"I got this request from a girl who was studying, and she said she would really appreciate to be able to iterate the study set... and I thought: 'That's a brilliant idea! And I can execute that!' And the next morning, it was 9.15, I sent her a screen capture."

Lexie emerged from Elina's frustration with ineffective study routines and gamified edtech that didn't actually help kids learn. She built an AI-powered study tool for kids aged 10-15 that turns handwritten notes into adaptive quizzes revealing knowledge gaps—private, ad-free, and subscription-based. What makes Lexie remarkable isn't just the technology, but the speed of iteration. When a user requested a feature, Elina designed and implemented it overnight, sending a screen capture by 9:15 the next morning. This kind of responsiveness—from customer feedback to working feature in hours—represents a fundamental shift in how software can be built. Today, Lexie has over 1,500 users with paying customers, proving that AI-assisted development isn't just for prototypes anymore.

The Workflow: It's Not Just "Vibing"

"I spend 30 minutes designing the whole workflow inside my head... all the UX interactions, the data flow, and the overall architectural decisions... so I spent a lot of time writing a really, really good spec. And then I gave that to Claude Code."

Elina has mixed feelings about the term "vibecoding" because it suggests carelessness. Her actual workflow is highly disciplined. She spends significant time designing the complete workflow mentally—all UX interactions, data flow, and architectural decisions—then writes detailed specifications. She often collaborates with Claude to write these specs, treating the AI as a thinking partner. Once the spec is clear, she gives it to Claude Code and enters a dialogue mode: splitting work into smaller tasks, maintaining constant checkpoints, and validating every suggestion. She reads all the code Claude generates (32,000 lines client-side, 8,000 server-side) but doesn't write code herself anymore. This isn't lazy—it's a new kind of discipline, focused on design, architecture, and clear communication rather than syntax.

Reading Code vs. Writing Code: A New Skill Set

"AI is able to write really good code, if you just know how to read it... But I do not write any code. I haven't written a single line of code in a long time."

Elina's approach reveals an important insight: the skill shifts from writing code to reading and validating it. She treats Claude Code as a highly skilled companion that she needs to communicate with extremely well. This requires knowing "what good looks like"—her 15 years of experience as a designer gives her the judgment to evaluate what the AI produces. She maintains dialogue throughout development, using checkpoints to verify direction and clarify requirements. The fast feedback loop means that when she fails to explain something clearly, she gets immediate feedback and can course-correct instantly. This is fundamentally different from traditional development, where miscommunication might not surface until weeks later.

The Anti-Pattern: Letting AI Run Rampant

"You need to be really specific about what you want to do, and how you want to do it, and treat the AI as this highly skilled companion that you need to be able to communicate with."

The biggest mistake Elina sees is treating AI like magic—giving vague instructions and expecting it to "just figure it out." This leads to chaos. Instead, developers need to be incredibly specific about requirements and approach, treating AI as a skilled partner who needs clear communication. The advantage is that the iteration loop is so fast that when you fail to explain something properly, you get feedback immediately and can clarify. This makes the learning curve steep but short. The key is understanding that AI amplifies your skills—if you don't know what good architecture looks like, AI won't magically create it for you.

Breaking the Gatekeeping: One Person, Ten Jobs

"I think that I can say that I am a walking example of what you can do, if you have the proper background, and you know what good looks like. You can do several things at a time. What used to require 10 people, at least, to build before."

Elina sees herself as living proof that the gatekeeping around software development is breaking down. Someone with the right background and judgment can now do what previously required a team of ten people. She's passionate about others experiencing this same freedom—the ability to execute their vision without compromise, to respond to user feedback overnight, to build production-quality software solo. This isn't about replacing developers; it's about expanding who can build software and what's possible for small teams. For Elina, working with a traditional team would actually slow her down now—she'd spend more time explaining her vision than the team would save through parallel work.

The Future: Intent-Based Software That Emerges and Disappears

"The software gets built in an instant... it's going to this intent-based mode when we actually don't even need apps or software as we know them."

Elina's vision for the future is radical: software that emerges when you need it and disappears when you don't. Instead of permanent apps, you'd have intent-based systems that generate solutions in the moment. This shifts software from a product you download and learn to a service that materializes around your needs. We're not there yet, but Elina sees the trajectory clearly. The speed at which she can now build and modify Lexie—overnight feature implementations, instant bug fixes, continuous evolution—hints at a future where software becomes fluid rather than fixed.

Getting Started: Just Do It

"I think that the best resource is just your own frustration with some existing tools... Just open whatever tool you're using, be it Claude or ChatGPT, and start interacting and discussing, getting into this mindset that you're exploring what you can do, and then just start doing."

When asked about resources, Elina's advice is refreshingly direct: don't look for tutorials, just start. Let your frustration with existing tools drive you. Open Claude or ChatGPT and start exploring, treating it as a dialogue partner. Start building something you actually need. The learning happens through doing, not through courses. Her own journey proves this—she went from experimenting with Cursor to shipping Lexie to the App Store in two weeks, not because she found the perfect tutorial, but because she just started building. The tools are good enough now that the biggest barrier isn't technical knowledge—it's having the courage to start and the judgment to evaluate what you're building.

About Elina Patjas

Elina is building Lexie, an AI-powered study tool for kids aged 10–15. Frustrated by ineffective "read for exams" routines and gamified edtech fluff, she designed Lexie to turn handwritten notes into adaptive quizzes that reveal knowledge gaps—private, ad-free, and subscription-based. Lexie is learning, simplified. You can link with Elina Patjas on LinkedIn.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
AGENDA: 04:47 Cursor Raises $2.3BN at $29BN Valuation 11:36 What Gemini 3 Means for Lovable, Cursor and Replit 30:54 Peter Thiel and SoftBank Sell NVIDIA: The Bubble Bursting? 48:54 Oracle Credit Default Swaps: The Risk is Increasing 01:07:22 Stripe Does Tender at All-Time High: Why the Best Companies Will Never IPO 01:19:18 Why Retail Will Cause a Surge of Capital into VC Funds
OpenAI is testing out group chats as a sort of collaborative prompting experience. The hyperscalers are lining up against Nvidia in one specific arena. The Sam Altman and Elon Musk feud isn't over. Google knows who sent you that fake USPS shipment alert text. And, of course, the Weekend Longreads Suggestions. ChatGPT launches pilot group chats across Japan, New Zealand, South Korea, and Taiwan (TechCrunch) Amazon and Microsoft Back Effort That Would Restrict Nvidia's Exports to China (WSJ) OpenAI, Apple Lose Bid to Toss Musk xAI Suit Over Competition (Bloomberg) AI startup Cursor raises $2.3 billion funding round at $29.3 billion valuation (CNBC) You know those fake USPS texts? Google says it's found who's behind them (Fast Company) Weekend Longreads Suggestions: Sundar Pichai Is Google's AI 'Wartime CEO' After All (Bloomberg) CRYPTO: Realm of the Coin (Vanity Fair) I'm Going to Be a Dad. Here's Why I'm Not Posting About My Kid Online (CNET) Learn more about your ad choices. Visit megaphone.fm/adchoices
Plus: Harbinger Motors raises $160 million and secures an order from FedEx. And AI startup Cursor raises an additional $2.3 billion. Zoe Kuhlkin hosts.