Andrej Karpathy, former Director of AI at Tesla and founding member of OpenAI
Happy New Year! NVIDIA just spent $20 billion to hollow out an AI company for its brains, while Meta and Google scramble to scoop up fresh talent before AI gets "too weird to manage." Who's winning, who's left behind, and what do these backroom deals mean for the future of artificial intelligence?
Andrej Karpathy admits programmers cannot keep pace with AI advances
Economic uncertainty in AI despite massive stock market influence
Google, Anthropic, and Microsoft drive AI productization for business and consumers
OpenAI, Claude, and Gemini battle for consumer AI dominance
Journalism struggles to keep up with AI realities and misinformation tools
Concerns mount over AI energy, water, and environmental impact narratives
Meta buys Manus, expands AI agent ambitions with Llama model
OpenAI posts high-stress "Head of Preparedness" job worth $555K+
Training breakthroughs: DeepSeek's mHC and comparisons to Action Park
U.S. lawmakers push broad, controversial internet censorship bills
Age verification and bans spark state laws, VPN workaround explosion
U.S. drone ban labeled protectionist as industry faces tech shortages
FCC security initiatives falter; Cyber Trust Mark program scrapped
Waymo robotaxis stall in blackouts, raising AV urban planning issues
School cellphone bans expose kids' struggle with analog clocks
MetroCard era ends in NYC as tap-to-pay takes over subway access
RAM, VRAM, and GPU prices soar as AI and gaming squeeze supply
CES preview: Samsung QD-OLED TV, Sony AFEELA car, gadget show hype
Remembering Stewart Cheifet and Computer Chronicles' legacy
Host: Leo Laporte
Guests: Dan Patterson and Joey de Villa
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors: zscaler.com/security, canary.tools/twit (use code: TWIT), monarch.com with code TWIT, Melissa.com/twit, redis.io
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
2025 has almost come to a close and the new year is right around the corner. At this time of year, it's usual to reflect on the year and consider some of the biggest, most impactful things that have happened. But here at ITPro, we like to take a different approach: what didn't happen?
The tech industry can't help but make bold promises and some just don't pan out. What are some of the biggest targets, trends, and predictions that just haven't come to fruition in 2025?
In this episode, Jane and Rory are once again joined by Ross Kelly, news and analysis editor at ITPro, to discuss the biggest misses of the year.
Read more:
Is enterprise agentic AI adoption matching the hype?
‘Agent washing' is here: Most agentic AI tools are just ‘repackaged' RPA solutions and chatbots – and Gartner says 40% of projects will be ditched within two years
Agentic AI carries huge implications for security teams - here's what leaders should know
'It's slop': OpenAI co-founder Andrej Karpathy pours cold water on agentic AI hype – so your jobs are safe, at least for now
IBM is targeting 'quantum advantage' in 12 months – and says useful quantum computing is just a few years away
SAS thinks quantum AI has huge enterprise potential – here's why
SAS rejects generative AI hype in favor of data fundamentals at Innovate 2025
Post-quantum cryptography is now top of mind for cybersecurity leaders
Why does Nvidia have a no-chip quantum strategy?
Meta executive denies hyping up Llama 4 benchmark scores – but...
Ho Ho Ho, Alex here! (a real human writing these words, this needs to be said in 2025) Merry Christmas (to those who celebrate) and welcome to the very special yearly ThursdAI recap! This was an intense year in the world of AI, and after 51 weekly episodes (this is episode 52!) we have the ultimate record of all the major and most important AI releases of this year! So instead of bringing you a weekly update (it's been a slow week so far, most AI labs are taking a well-deserved break, the Chinese AI labs haven't yet surprised anyone), I'm dropping a comprehensive yearly AI review! Quarter by quarter, month by month, both in written form and as a pod/video! Why do this? Who even needs this? Isn't most of it obsolete? I have asked myself this exact question while prepping for the show (it was quite a lot of prep, even with Opus's help). I eventually landed on, hey, if nothing else, this will serve as a record of the insane year of AI progress we all witnessed. Can you imagine that the term Vibe Coding is less than 1 year old? That Claude Code was released at the start of THIS year? We hedonically adapt to new AI goodies so quickly, and I figured this will serve as a point-in-time check we can come back to and feel the acceleration! With that, let's dive in - P.S. the content below is mostly authored by my co-author for this, Opus 4.5 high, which at the end of 2025 I find to be the best creative writer with the best long context coherence that can imitate my voice and tone (hey, I'm also on a break!)
Did AI end up being a political force this year?
Alexander Embiricos leads product on Codex, OpenAI's powerful coding agent, which has grown 20x since August and now serves trillions of tokens weekly. Before joining OpenAI, Alexander spent five years building a pair programming product for engineers. He now works at the frontier of AI-led software development, building what he describes as a software engineering teammate—an AI agent designed to participate across the entire development lifecycle.We discuss:1. Why Codex has grown 20x since launch and what product decisions unlocked this growth2. How OpenAI built the Sora Android app in just 18 days using Codex3. Why the real bottleneck to AGI-level productivity isn't model capability—it's human typing speed4. The vision of AI as a proactive teammate, not just a tool you prompt5. The bottleneck shifting from building to reviewing AI-generated work6. Why coding will be a core competency for every AI agent—because writing code is how agents use computers best—Brought to you by:WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs: https://workos.com/lennyFin—The #1 AI agent for customer service: https://fin.ai/lennyJira Product Discovery—Confidence to build the right thing: https://atlassian.com/lenny/?utm_source=lennypodcast&utm_medium=paid-audio&utm_campaign=fy24q1-jpd-imc—Transcript: https://www.lennysnewsletter.com/p/why-humans-are-ais-biggest-bottleneck—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/180365355/my-biggest-takeaways-from-this-conversation—Where to find Alexander Embiricos:• X: https://x.com/embirico• LinkedIn: https://www.linkedin.com/in/embirico—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Alexander Embiricos (05:13) The speed and ambition at OpenAI(11:34) Codex: OpenAI's coding agent(15:43) Codex's explosive growth(24:59) The future of AI and coding agents(33:11) The impact of AI on engineering(44:08) How Codex has impacted the way PMs operate(45:40) Throwaway code and ubiquitous coding(47:10) Shipping the Sora Android app(49:01) Building the Atlas browser(53:34) Codex's impact on productivity(55:35) Measuring progress on Codex(58:09) Why they are building a web browser(01:01:58) Non-engineering use cases for Codex(01:02:53) Codex's capabilities(01:04:49) Tips for getting started with Codex(01:05:37) Skills to lean into in the AI age(01:10:36) How far are we from a human version of AI?(01:13:31) Hiring and team growth at Codex(01:15:47) Lightning round and final thoughts—Referenced:• OpenAI: https://openai.com• Codex: https://openai.com/codex• Inside ChatGPT: The fastest-growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley• Dropbox: http://dropbox.com• Datadog: https://www.datadoghq.com• Andrej Karpathy on X: https://x.com/karpathy• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Atlas: https://openai.com/index/introducing-chatgpt-atlas• How Block is becoming the most AI-native enterprise in the world | Dhanji R. 
Prasanna: https://www.lennysnewsletter.com/p/how-block-is-becoming-the-most-ai-native• Goose: https://block.xyz/inside/block-open-source-introduces-codename-goose• Lessons on building product sense, navigating AI, optimizing the first mile, and making it through the messy middle | Scott Belsky (Adobe, Behance): https://www.lennysnewsletter.com/p/lessons-on-building-product-sense• Sora Android app: https://play.google.com/store/apps/details?id=com.openai.sora&hl=en_US&pli=1• The OpenAI Podcast—ChatGPT Atlas and the next era of web browsing: https://www.youtube.com/watch?v=WdbgNC80PMw&list=PLOXw6I10VTv9GAOCZjUAAkSVyW2cDXs4u&index=2• How to measure AI developer productivity in 2025 | Nicole Forsgren: https://www.lennysnewsletter.com/p/how-to-measure-ai-developer-productivity• Compiling: https://3d.xkcd.com/303• Jujutsu Kaisen on Netflix: https://www.netflix.com/title/81278456• Tesla: https://www.tesla.com• Radical Candor: From theory to practice with author Kim Scott: https://www.lennysnewsletter.com/p/radical-candor-from-theory-to-practice• Andreas Embirikos: https://en.wikipedia.org/wiki/Andreas_Embirikos• George Embiricos: https://en.wikipedia.org/wiki/George_Embiricos—Recommended books:• Culture series: https://www.amazon.com/dp/B07WLZZ9WV• The Lord of the Rings: https://www.amazon.com/Lord-Rings-J-R-R-Tolkien/dp/0544003411• A Fire Upon the Deep (Zones of Thought series Book 1): https://www.amazon.com/Fire-Upon-Deep-Zones-Thought/dp/1250237750• Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity: https://www.amazon.com/Radical-Candor-Kick-Ass-Without-Humanity/dp/1250103509—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Hey everyone, December started strong and does NOT want to slow down!? OpenAI showed us their response to the Code Red and it's GPT 5.2, which doesn't feel like a .1 upgrade! We got it literally as breaking news at the end of the show, and oh boy! A new kind of LLM is here. GPT, then Gemini, then Opus and now GPT again... Who else feels like we're on a trippy AI rollercoaster? Just me?
"Rage bait" has become Oxford's word of the year. Is reviving old photos and turning them into generated video a good idea? Andrej Karpathy proposed a new idea for how to talk to language models: an LLM council. Samsung unveiled the tri-fold phone Galaxy Z TriFold. There is growing talk of data servers in space or even on the Moon – why is that not a good idea? Warner Music reached an agreement with Suno AI and will collaborate on generated music. Pavel Durov introduced Cocoon, a decentralized AI compute network that lets idle graphics cards be put to work. A US company trained an AI model on prisoners' video and audio calls and will now monitor them. In Lithuania, mobile operators are announcing a unified system for blocking suspicious calls. While training Claude, it was given a "soul document." In China, there is talk of a humanoid robot bubble.
[Subscribe] New episodes every morning at 5:30, right on schedule.
[Original article] Title: Collins' Word of the Year 2025: AI meets authenticity as society shifts
Body: Tired of wrestling with syntax? Just go with the vibes. That's the essence of vibe coding, Collins' Word of the Year 2025, a term that captures something fundamental about our evolving relationship with technology. Coined by AI pioneer Andrej Karpathy, vibe coding refers to the use of artificial intelligence prompted by natural language to write computer code. Basically, telling a machine what you want rather than painstakingly coding it yourself.
Vocabulary note: wrestle with, phr. /ˈresl wɪð/ – to struggle to deal with or understand something difficult; to work hard at solving, to rack one's brains. e.g. The students had to wrestle with the complex philosophical concepts.
To get the full original article and detailed study notes, follow the WeChat official account "早安英文" and reply "外刊". More interesting English-learning material awaits you!
[About the show] "早安英文 – Daily Close Reading of the Foreign Press" walks you through close readings of the latest international publications and the hottest global stories: analyzing grammar structures, breaking down long and difficult sentences, offering down-to-earth translations, and explaining key vocabulary. All selections come from leading international outlets such as The Economist, The New York Times, The Wall Street Journal, The Washington Post, The Atlantic, Science, and National Geographic.
[Who it's for] 1. English learners who follow current affairs and want to pick up the latest, most current English expressions 2. Anyone who wants to improve their listening, speaking, reading, and writing through authentic English 3. English enthusiasts who want to master expressions quickly and have plans to study or travel abroad 4. Candidates preparing for English exams of all kinds (CET-4/6, TOEFL, IELTS, postgraduate entrance exams, etc.)
[What you'll get] 1. Over 1,000 close-reading lessons on foreign-press articles, expanding your range of expression and cultural background 2. Word-by-word, sentence-by-sentence explanations to systematically build English vocabulary, listening, reading, and grammar 3. Study notes with every episode, including full-text annotations, analysis of long and difficult sentences, and tricky grammar points, to clear away reading obstacles.
The podcast presents a detailed analysis of "vibe coding," a new paradigm in AI-assisted software development in which programmers describe their intentions in natural language and the AI generates the code. This phenomenon, popularized by a tweet from Andrej Karpathy in 2025 and recognized as Word of the Year by the Collins Dictionary, represents a democratization of programming and an unprecedented increase in prototyping speed, with startups reaching extraordinary milestones. However, the text also emphasizes the critical challenges of this approach, such as serious security vulnerabilities in generated code, maintenance and debugging difficulties, and criticism from experts such as Andrew Ng about trivializing the intellectual effort required. Ultimately, the future of vibe coding is seen as a hybrid model in which humans act as architects and reviewers, ensuring code quality and security.
Ever heard of "vibe coding"? It's been named Word of the Year by Collins Dictionary, but what does it mean? You can thank OpenAI's co-founder Andrej Karpathy, who came up with the phrase.
The World Weather Attribution has released new data revealing that climate change significantly amplified Hurricane Melissa's destructive winds and rainfall. We speak to the rapid study's co-author, climate scientist Theodore Keeping, from the World Weather Attribution team at Imperial College London.
Three Chinese astronauts are stuck in space for longer than expected, after an unidentified object hits the return spacecraft.
Also in this episode:
UK energy supplier Tomato Energy has collapsed
Prince William honours young environmentalists at Earthshot Prize
The newly described species of toads that give birth to fully formed toadlets
AI chatbots "suffer from brainrot" too
Hosted on Acast. See acast.com/privacy for more information.
AI coding assistants promise to write your code, speed up your sprint, and maybe even make engineers obsolete. But what if the people building with them every day see something very different?
In this special Halloween edition of CRAFTED. — which also marks the show's third anniversary! — a masked CTO shares what he can't say publicly: that these tools are powerful, but insidious. In his view, coding assistants are great for auto-complete, but they can't do what a human engineer does. He says they're terrible at starting from scratch and will often suggest code that "works in vacuum", but not in context. And because AI can write so much code, so quickly, it's hard to catch errors. In short, he sees an increase in short-term velocity at the expense of increased defects and an increasing dependency on systems that are untrustworthy.
I want to emphasize that this episode features the experience of one very experienced person. There are obviously others who disagree, who say AI coding agents are incredible, so long as they're managed well. However, there are also an increasing number of people questioning the sustainability of coding agents — they're incredibly expensive to run — and also how good they are in the first place.
For example, Andrej Karpathy, the guy who literally coined the phrase "vibe coding" and was early at OpenAI and Tesla, just said publicly on the Dwarkesh Podcast that the path to AI agents is going to be a lot slower than people in the industry think it will be. He said coding agents are "not that good at writing code that's never been written before" and that there is too much hype right now about where AI really is, with people in the industry, quote, "trying to pretend like this is amazing, when it's not." And he said: "My Claude Code or Codex still feels like this elementary-grade student."
Today's guest agrees with Karpathy on a lot of this. Our guest has worked at startups, scale-ups, and big tech companies you've definitely heard of, and today he's at a very AI-forward company, using AI coding tools every day. Enjoy this special episode of CRAFTED.!
---
And pretty please...!
Share with a friend! Word of mouth is how podcasts grow!
Subscribe to the newsletter at https://www.crafted.fm
Share your feedback! I'm experimenting with new episode formats and would love your feedback on this and other episodes. DM me on LinkedIn or contact me by email via https://www.crafted.fm
Sponsor the show? I'm actively speaking to potential sponsors for 2026 episodes. Let's talk!
Get psyched!… There are some big updates to the show in 2026!
---
Key Quotes
03:16 The myth of AI replacement: "The idea that AI can actually supplant a software engineer in their current role is basically nonsense."
06:29 Why AI struggles without human input: "If you remove the human engineer from the equation, there's no place to start from. The AI does not do well when you're starting from scratch because it doesn't have the real-world context or the continuous learning required to make that system better."
12:21 The illusion of speed: "Coding assistants help you generate code very quickly. There's an illusion that your velocity increases. What actually happens is you're just shipping more bugs to production."
13:30 More code than humans can review: "AI generates so much code that no human can keep that context in their head and review it in a meaningful way. At some point you just have to trust — but who are you trusting? You're trusting the AI, and the AI cannot be trusted."
14:02 AI & Junior Engineer Hiring: "The narrative that hiring trends have anything to do with AI is absurd. It's not that AI is replacing junior engineers — it's that companies are running lean and don't have the bandwidth to train them."
15:42 Where the AI Bulls and Bears Differ: "Whereas we see flawed systems that aren't ready for primetime [...] they view this as 'oh, that's, that's insignificant. They will get better almost immediately. It's not a big deal.' But we've been repeating this cycle for years at this point."
19:50 Where AI Excels: "Where review and revise are part of the process already, that's a really good place for generative AI because you already have a human in the loop."
21:02 What builders need to unlearn: "To the extent that people think these things are thinking or reasoning or on any path to AGI at all — they should discard that. These models don't think. They're very sophisticated pattern-matching machines, and that's really it."
OpenAI presents ChatGPT Atlas, its first AI-first browser with agentic capabilities, designed to read, plan, and act on the web. But behind the hype a bigger question opens up: how close are we really to artificial general intelligence? An experimental study, LLM Brain Rot, shows that language models can literally "degrade" if exposed to too much junk content: they lose reasoning ability, context memory, and stability. And finally Andrej Karpathy — formerly of Tesla and OpenAI — puts it all in perspective: this is not the year of agents, but the decade in which we will have to learn how to actually build them. AI won't explode overnight: it will grow slowly, amid cognitive bugs, new frameworks, and a healthy dose of humility. And speaking of doing things right: we recently released Datapizza AI, our open-source framework for GenAI, and as of last Monday it's on Product Hunt. Try it, experiment, and tell us what you think: your feedback helps us improve it!
GitHub → https://bit.ly/48ELY0O
****
For more content on the world of Tech, Data & AI, follow us on our channels!
Is the AI boom already peaking? In this episode of The Newcomer Podcast, Eric, Madeline and Tom take a hard look at the hype cycle driving Silicon Valley's latest gold rush — from Andreessen Horowitz's record-breaking $25 billion year to Amazon's push to automate its entire workforce. We explore whether AI's trillion-dollar promise is real innovation, or if the cracks are already showing. From OpenAI's overblown math claims to Andrej Karpathy's "State of the Union" reflections, we break down what's really happening behind the headlines.
This week, we discuss OpenAI's new browser, AI trying to build spreadsheets, and when to use Claude skills. Plus, Coté explores the art of the perfect staycation. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/PnwoFl5JjNo?si=DS2CoIgHVlVU9Y3m) 543 (https://www.youtube.com/live/PnwoFl5JjNo?si=DS2CoIgHVlVU9Y3m) Runner-up Titles Firewire is dead USB, what are you going to do? It's like I tell my son: you know what to do, you chose not to do it. I am just a guest. I don't need helpful An amazing hole. Slides for nobody You closed the loop It's pretty amazing, but does it need to exist? Slackhole Rundown OpenAI Introducing ChatGPT Atlas (https://openai.com/index/introducing-chatgpt-atlas/) OpenAI Is Building a Banker (https://www.bloomberg.com/opinion/newsletters/2025-10-21/openai-is-building-a-banker?srnd=undefined&embedded-checkout=true) OpenAI has five years to turn $13 billion into $1 trillion (https://techcrunch.com/2025/10/14/openai-has-five-years-to-turn-13-billion-into-1-trillion/) AI agents are not amazing, they are slop: says OpenAI cofounder Andrej Karpathy as he strongly disagrees with CEO Sam Altman on AGI timeline - The Times of India (https://timesofindia.indiatimes.com/technology/tech-news/ai-agents-are-not-amazing-they-are-slop-says-openai-cofounder-andrej-karpathy-as-he-strongly-disagrees-with-ceo-sam-altman-on-agi-timeline/articleshow/124720565.cms) OpenAI's ChatGPT will soon allow ‘erotica' for adults in major policy shift (https://www.cnbc.com/2025/10/15/erotica-coming-to-chatgpt-this-year-says-openai-ceo-sam-altman.html) OpenAI Inks Deal With Broadcom to Design Its Own Chips for A.I. (https://www.nytimes.com/2025/10/13/technology/openai-broadcom-chips-deal.html) Claude Skills are awesome, maybe a bigger deal than MCP (https://simonwillison.net/2025/Oct/16/claude-skills/#atom-everything) OpenStack Flamingo pays down technical debt as adoption continues to climb (https://www.networkworld.com/article/4066532/openstack-flamingo-pays-down-technical-debt-as-adoption-continues-to-climb.html) Relevant to your Interests Elon Musk will settle $128 million Twitter execs lawsuit (https://www.theverge.com/news/796239/elon-musk-x-128-million-twitter-exec-lawsuit-settlement) GitHub Will Prioritize Migrating to Azure Over Feature Development (https://thenewstack.io/github-will-prioritize-migrating-to-azure-over-feature-development/) The Discord Hack is Every User's Worst Nightmare (https://www.404media.co/the-discord-hack-is-every-users-worst-nightmare/) Cursor-Maker Anysphere Considers Investment Offers at $30 Billion Valuation (https://www.theinformation.com/articles/cursor-maker-anysphere-considers-investment-offers-30-billion-valuation) Rubygems.org AWS Root Access Event – September 2025 (https://rubycentral.org/news/rubygems-org-aws-root-access-event-september-2025/) This Discord Zendesk compromise has gotten more silly (https://x.com/vxunderground/status/1976417029289607223) WP Engine Vs Automattic & Mullenweg Is Back In Play (https://www.searchenginejournal.com/wp-engine-vs-automattic-mullenweg-is-back-in-play/557905/) Windows 11 removes all bypass methods for Microsoft account setup, removing local accounts (https://alternativeto.net/news/2025/10/windows-11-now-blocks-all-microsoft-account-bypasses-during-setup/) Introducing the React Foundation: The New Home for React & React Native (https://engineering.fb.com/2025/10/07/open-source/introducing-the-react-foundation-the-new-home-for-react-react-native/?utm_source=changelog-news) Wiz Finds Critical Redis 
RCE Vulnerability: CVE‑2025‑49844 | Wiz Blog (https://www.wiz.io/blog/wiz-research-redis-rce-cve-2025-49844) DevRel is -Unbelievably- Back (https://dx.tips/devrel-is-back) The Ruby community has a DHH problem (https://tekin.co.uk/2025/09/the-ruby-community-has-a-dhh-problem) YouTube rolls out its redesigned video player globally (https://www.engadget.com/entertainment/youtube/youtube-rolls-out-its-redesigned-video-player-globally-174609883.html) Oracle stock rises as company confirms Meta cloud deal (https://www.cnbc.com/2025/10/16/oracle-confirms-meta-cloud-deal-.html) Adiós, AirPods (https://www.theatlantic.com/technology/2025/10/apple-airpods-live-translation/684582/?gift=iWa_iB9lkw4UuiWbIbrWGV8Zzu9GF6V5YZpJtnAzcvU&utm_source=copy-link&utm_medium=social&utm_campaign=share) NVIDIA shows off its first Blackwell wafer manufactured in the US (https://www.engadget.com/big-tech/nvidia-shows-off-its-first-blackwell-wafer-manufactured-in-the-us-192836249.html) This Is How Much Anthropic and Cursor Spend On Amazon Web Services (https://www.wheresyoured.at/costs/) Automattic CEO calls Tumblr his 'biggest failure' so far (https://techcrunch.com/2025/10/20/automattic-ceo-calls-tumblr-his-biggest-failure-so-far/) Marc Benioff says Salesforce is saving about $100M a year by using AI tools in its customer service operations (https://www.bloomberg.com/news/articles/2025-10-14/salesforce-says-ai-customer-service-saves-100-million-annually | http://www.techmeme.com/251014/p32#a251014p32) Amazon cloud computing outage disrupts Snapchat, Ring and many other online services (https://apnews.com/article/amazon-east-internet-services-outage-654a12ac9aff0bf4b9dc0e22499d92d7) Amazon Outage Forces Hundreds of Websites Offline for Hours (https://www.nytimes.com/2025/10/20/business/aws-down-internet-outage.html) Today is when Amazon brain drain finally caught up with AWS (https://www.theregister.com/2025/10/20/aws_outage_amazon_brain_drain_corey_quinn/) AWS crash causes $2,000 Smart Beds to overheat and get stuck upright - Dexerto (https://www.dexerto.com/entertainment/aws-crash-causes-2000-smart-beds-to-overheat-and-get-stuck-upright-3272251/) Nonsense Streetlights Are Mysteriously Turning Purple. Here's Why (https://www.scientificamerican.com/article/streetlights-are-mysteriously-turning-purple-heres-why/) Buc-ee's is not America's top convenience store; Midwest chain takes No. 1 spot (https://local12.com/news/nation-world/bucees-not-america-top-convenience-store-satisfaction-ratings-rankings-midwest-chain-kwik-trip-takes-number-one-spot-wawa-sheetz-quicktrip-cincinnati-ohio) French post office rolls out croissant-scented stamp (https://www.ctvnews.ca/world/article/french-post-office-rolls-out-croissant-scented-stamp/) Listener Feedback Jeffrey is looking for college interns. (https://careers.blizzard.com/global/en/job/R025908/2026-US-Summer-Internships-Game-Engineering) Conferences Wiz Wizdom Conferences (https://www.wiz.io/wizdom), NYC November 3-5, London November 17-19 SREDay Amsterdam (https://sreday.com/2025-amsterdam-q4/), Coté speaking, November 7th. 
SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: The PR Guy Who Says the AI Boom Is a Bust (https://overcast.fm/+AAQL2e2DHQo) Matt: Comfort Ear Grip Hooks (https://www.amazon.com.au/dp/B07YVDT3KT) Coté: MSG on popcorn, Claude Skills, Masman Curry, Sora? Photo Credits Header (https://unsplash.com/photos/person-holding-white-and-gray-stone-OV44gxH71DU)
This episode features Rob Toews from Radical Ventures and Ari Morcos, Head of Research at Datology AI, reacting to Andrej Karpathy's recent statement that AGI is at least a decade away and that current AI capabilities are "slop." The discussion explores whether we're in an AI bubble, with both guests pushing back on overly bearish narratives while acknowledging legitimate concerns about hype and excessive CapEx spending. They debate the sustainability of AI scaling, examining whether continued progress will come from massive compute increases or from efficiency gains through better data quality, architectural innovations, and post-training techniques like reinforcement learning. The conversation also tackles which companies truly need frontier models versus those that can succeed with slightly-behind-the-curve alternatives, the surprisingly static landscape of AI application categories (coding, healthcare, and legal remain dominant), and emerging opportunities from brain-computer interfaces to more efficient scaling methods. (0:00) Intro(1:04) Debating the AI Bubble(1:50) Over-Hyping AI: Realities and Misconceptions(3:21) Enterprise AI and Data Center Investments(7:46) Consumer Adoption and Monetization Challenges(8:55) AI in Browsers and the Future of Internet Use(14:37) Deepfakes and Ethical Concerns(26:29) AI's Impact on Job Markets and Training(31:38) Google and Anthropic: Strategic Partnerships(34:51) OpenAI's Strategic Deals and Future Prospects(37:12) The Evolution of Vibe Coding(44:35) AI Outside of San Francisco(48:09) Data Moats in AI Startups(50:38) Comparing AI to the Human Brain(56:07) The Role of Physical Infrastructure in AI(56:55) The Potential of Chinese AI Models(1:03:15) Apple's AI Strategy(1:12:35) The Future of AI Applications With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
OpenAI is back and coming for search. This week on Mixture of Experts, we debrief ChatGPT Atlas, OpenAI's new web browser and the impacts on search. Then, Andrej Karpathy is back with his pessimistic timeline to AGI. Later, we discuss DeepSeek-OCR. Finally, can your AI have brain rot? Join host Tim Hwang and panelists Aaron Baughman, Abraham Daniels and Martin Keen on this week's Mixture of Experts to find out. 00:00 – Intro 00:55 – Goldman AI, Groq and IBM, Military AI and Uber 02:05 – ChatGPT Atlas 14:23 – Karpathy's AGI timeline 23:52 – DeepSeek-OCR 34:30 – AI brain rot The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
The "programmier.con 2025 - Web & AI Edition" takes place on October 29 and 30, 2025. Get your tickets for the conference on our website now! What if we had real AGI in less than two years?
She left a dream job to code her own LLM. Because even the best AI engineers are being left behind. And for good reason: no field is moving faster than generative AI. Every week, a new model. One that codes. One that translates your videos. One that generates them… And one… called "Banana"
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Silicon Valley spent the weekend debating whether it's time to delay AGI expectations by a decade — and what that would mean for the so-called "AI bubble." NLW breaks down the chain reaction: Microsoft's retreat from OpenAI's infrastructure arms race, an OpenAI math gaffe that went viral, and Andrej Karpathy's take on agent timelines — plus why none of it necessarily spells doom for real-world AI adoption.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? nlw@aidailybrief.ai
The Andrej Karpathy episode.
During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self driving took so long to crack, and what he sees as the future of education.
It was a pleasure chatting with him.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh
* Mercury helps you run your business better. It's the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com
* Google's Veo 3.1 update is a notable improvement to an already great model. Veo 3.1's generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google
Timestamps
(00:00:00) – AGI is still a decade away
(00:29:45) – LLM cognitive deficits
(00:40:05) – RL is terrible
(00:49:38) – How do humans learn?
(01:06:25) – AGI will blend into 2% GDP growth
(01:17:36) – ASI
(01:32:50) – Evolution of intelligence & culture
(01:42:55) - Why self driving took so long
(01:56:20) - Future of education
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Many people know that when models are trained, matrices and tensors get multiplied somewhere under the hood, and all of it is tied to differentiation. Denis Stepanov and I took on the difficult task of figuring out what exactly happens there! We also look forward to your likes, reposts, and comments in messengers and on social media!
Telegram chat: https://t.me/podlodka
Telegram channel: https://t.me/podlodkanews
Facebook page: www.facebook.com/podlodkacast/
Twitter account: https://twitter.com/PodcastPodlodka
Hosts in this episode: Zhenya Katella, Anya Simonova
Useful links:
Dive into Deep Learning – Aston Zhang, Zachary C. Lipton, Mu Li, Alexander J. Smola (online book with code and formulas) https://d2l.ai/ https://www.amazon.com/s/ref=dp_byline_sr_book_2?ie=UTF8&field-author=Zachary+C.+Lipton&text=Zachary+C.+Lipton&sort=relevancerank&search-alias=books
Micrograd by Andrej Karpathy https://github.com/karpathy/micrograd
Andrej Karpathy builds GPT from scratch https://www.youtube.com/watch?v=kCc8FmEb1nY
Scott Aaronson on LLM Watermarking https://www.youtube.com/watch?v=YzuVet3YkkA
Annotated history of Modern AI and Deep Learning by Jurgen Schmidhuber https://people.idsia.ch/~juergen/deep-learning-history.html
Probabilistic Machine Learning: An Introduction – Kevin Patrick Murphy https://probml.github.io/pml-book/book1.html
Probabilistic Machine Learning: Advanced Topics – Kevin Patrick Murphy https://probml.github.io/pml-book/book2.html
Pattern Recognition and Machine Learning – Christopher Bishop https://www.microsoft.com/en-us/research/wp-content/uploads/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
Deep Learning: Foundations and Concepts – Christopher Bishop, Hugh Bishop https://www.bishopbook.com/
Deep Learning – Ian Goodfellow, Yoshua Bengio, Aaron Courville https://www.deeplearningbook.org/
Deep Learning: Diving into the World of Neural Networks (in Russian) – S. Nikolenko, A. Kadurin, E. Arkhangelskaya https://www.k0d.cc/storage/books/AI,%20Neural%20Networks/%D0%93%D0%BB%D1%83%D0%B1%D0%BE%D0%BA%D0%BE%D0%B5%20%D0%BE%D0%B1%D1%83%D1%87%D0%B5%D0%BD%D0%B8%D0%B5%20(%D0%9D%D0%B8%D0%BA%D0%BE%D0%BB%D0%B5%D0%BD%D0%BA%D0%BE).pdf
Gonzo reviews of ML papers (Telegram channel, in Russian) – Grigory Sapunov, Alexey Tikhonov https://t.me/gonzo_ML
Machine Learning Street Talk podcast https://www.youtube.com/c/machinelearningstreettalk
Feedforward NNs, Autograd, Backprop (Datalore report, Denis Stepanov) https://datalore.jetbrains.com/report/static/Ht_isxs4iB2.BNIqv-C3WUp/pEpNv2eMVU9tEkPsaboR9y
Softmax Regression, Adversarial Attacks (Datalore report, Denis Stepanov) https://datalore.jetbrains.com/report/static/Ht_isxs4iB2.BNIqv-C3WUp/cIvd6zX1B5I3kULNiVCEyy
Dual Numbers, PINN (Datalore report, Denis Stepanov) https://datalore.jetbrains.com/report/static/Ht_isxs4iB2.BNIqv-C3WUp/3oa1BNrPGpQ8uc82tCaz5d
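The episode's useful links include Karpathy's micrograd, and the core topic (matrices, tensors, and differentiation under the hood) is easiest to see in code. Below is a minimal, self-contained sketch of reverse-mode automatic differentiation in the spirit of micrograd; it is my own illustration, not material from the episode. The Value class name mirrors micrograd's API, but the code is an independent toy example.

```python
# Minimal reverse-mode autodiff sketch in the spirit of Karpathy's micrograd.
# Each Value remembers its parents and a small closure that routes gradients back.
class Value:
    def __init__(self, data, parents=(), backward=lambda: None):
        self.data = data          # scalar payload
        self.grad = 0.0           # d(output)/d(this value), filled in by backward()
        self._parents = parents   # Values this one was computed from
        self._backward = backward # how to push gradients to the parents

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad           # d(a+b)/da = 1
            other.grad += out.grad          # d(a+b)/db = 1
        out._backward = backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule from the output back.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# y = w*x + b, differentiated with respect to w, x, and b
w, x, b = Value(3.0), Value(2.0), Value(1.0)
y = w * x + b
y.backward()
print(y.data, w.grad, x.grad, b.grad)  # 7.0, 2.0 (=x), 3.0 (=w), 1.0
```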
Ivan Zhao joins Joubin Mirzadegan on Grit to break down how the company's minimalist design became a strategic edge in a world overwhelmed by bloated software. He shares why the AI agent still hasn't arrived, and how Notion's modular approach might be the closest thing to making it real.
Guest: Ivan Zhao, co-founder and CEO of Notion
Mentioned in this episode: Fuzzy Khosrowshahi, Airbnb, Sequoia Capital, Linear, Figma, Apple, Things, Microsoft, BMW, Lumiere, The Beatles, The Rolling Stones, Eric Clapton, Rippling, Matt MacInnis, Inkling, Steve Jobs, Douglas Engelbart, Alan Kay, Bill Gates, OpenAI ChatGPT, Y Combinator, Andrej Karpathy, Toby Schachman, Simon Last, Spotify, Slack
Connect with Ivan Zhao: X, LinkedIn
Connect with Joubin: X, LinkedIn
Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins
Want Sam's playbook to turn ChatGPT into your executive coach? Get it here: https://clickhubspot.com/sfb Episode 726: Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) talk to Dharmesh Shah ( https://x.com/dharmesh ) about how he's using ChatGPT. — Show Notes: (0:00) Intro (2:00) Context windows (5:26) Vector embeddings (17:20) Automation and orchestration (21:03) Tool calling (28:14) Dharmesh's hot takes on AI (33:06) Agentic managers (39:41) Zuck poaches OpenAI talent w/ 9-figures (49:33) Shaan makes a video game — Links: • Agent.ai - https://agent.ai/ • Andrej Karpathy - https://www.youtube.com/andrejkarpathy — Check Out Shaan's Stuff: • Shaan's weekly email - https://www.shaanpuri.com • Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents. • Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Sam's List - http://samslist.co/ My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
Generative AI with the large language models, the LLMs, is the biggest thing that has happened in software development in the four decades Thomas has spent writing instructions for computers to get them to do what he wants them to do. And to be clear, it is not the software development behind the LLMs that this is about: it is the fact that you can use the LLMs to build software. He is far from alone in that view within the software industry, and Klaus is not dismissive of the idea either. To better explain why, a recent talk by Andrej Karpathy (ex-OpenAI, ex-Tesla AI) on "Software 3.0" becomes the starting point for a tour up through history, from the hand-coded and intentionally predictable Software 1.0 to the probabilistic and confabulating 3.0.
**** Now also on YouTube – if you prefer your podcast with video. https://youtube.com/@silberblom **** Who pays for Silberbauer & Blomseth? We do, ourselves. Our content is in no way suited to sponsors or ads for protein powder, VPN connections, or e-books. So hosting, equipment, and all the rest is at our own expense. The only thing we ask in return (if you like what we make, that is) is that you throw some stars, and maybe even a small recommendation, our way on Apple Podcasts. It means the world. We all crave recognition in one form or another, after all. Remember to follow us on Bluesky (@silberblom) Linktree
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Andrej Karpathy's Software 3.0 talk reframes LLMs as a new kind of software—programmable, agent-native, and fundamentally different from past computing models. This episode breaks down his key ideas, from autonomy sliders to the need for new infrastructure designed for AI-first users.
Source: https://www.youtube.com/watch?v=LCEmiRjPEtQ
Get Ad Free AI Daily Brief: https://patreon.com/AIDailyBrief
Brought to you by:
Gemini - Supercharge your creativity and productivity - http://gemini.google/
KPMG – Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions.
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
AGNTCY - The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at agntcy.org - https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-aidailybrief_podcast&utm_channel=podcast&utm_source=podcast
Vanta - Simplify compliance - https://vanta.com/nlw
Plumb - The automation platform for AI experts and consultants https://useplumb.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown
Interested in sponsoring the show? nlw@breakdown.network
This week on Cloud Unplugged: AI goes local, Google Cloud breaks the internet, and the DOJ turns up the heat on Google's $32B Wiz acquisition.
We're breaking down the biggest stories in cloud and AI:
Context & Qualcomm are teaming up to move AI agents off the cloud and onto your device. What does this mean for the future of local-first AI?
A major Google Cloud outage caused chaos across Cloudflare, Shopify, and Discord. We explain what went wrong and what it tells us about the risks of centralised cloud infrastructure.
The DOJ is investigating Google's acquisition of Wiz, raising questions about cloud security competition and antitrust concerns.
Plus: Andrej Karpathy's Software 3.0 vision. Is natural language the new programming interface?
Hosted by Lewis and Jon, two cloud-native veterans covering the real stories behind the hype in cloud, AI, and dev infrastructure.
Links:
Dax's tweet about opencode rewrite
opencode.ai
Intro | opencode
GitHub - sst/opencode: AI coding agent, built for the terminal.
Andrej Karpathy: Software Is Changing (Again) - YouTube
Models.dev — An open-source database of AI models
Sponsor: Terminal now offers a monthly box called Cron.
Want to carry on the conversation? Join us in Discord. Or send us an email at sliceoffalittlepieceofbacon@tomorrow.fm.
Topics:
(00:00) - A Canadian standoff
(00:29) - Who's the resident nice guy around here?
(02:27) - Finding the Apple of carseats and baby strollers
(05:39) - Transitioning from walking to running
(08:03) - Sleeping struggles
(11:21) - Launching Opencode
(16:48) - Is starting a podcast the key to working well together as programmers? 4 out of 5 podcast editors say yes
(22:46) - Figuring out what to work on next in open source software
(32:29) - Dax is still living in oblivious bliss from Twitter
(33:45) - Andrej Karpathy on how Software Is Changing
(35:53) - Dax tries vibe coding
(46:30) - How much of a bet are we placing on the terminal?
(48:17) - Writing code for Frank
★ Support this podcast ★
AI models have a defined memory ceiling, which is reshaping the ongoing debates surrounding copyright and data privacy. Recent research from Meta, Google DeepMind, Cornell, and NVIDIA reveals that large language models have a fixed memorization capacity of approximately 8.6 bits per parameter. This finding clarifies the distinction between memorized data and generalized knowledge, indicating that larger datasets do not necessarily lead to increased memorization of specific data points. This understanding is crucial as it informs the operational mechanisms of AI models and addresses concerns related to copyright infringement.
Sundar Pichai, CEO of Google, has introduced the term "artificial jagged intelligence" to describe the current phase of AI development, highlighting the non-linear progress and the challenges faced by researchers despite significant advancements. Pichai's perspective reflects the mixed performance of AI models, which can exhibit extraordinary capabilities alongside notable errors. This sentiment is echoed by deep learning researcher Andrej Karpathy, emphasizing the unpredictability of AI performance and the need for a more nuanced understanding of its capabilities.
The rise of AI retrieval bots is transforming how users access information online, with a significant increase in traffic from these bots. Companies like OpenAI and Anthropic are deploying these bots to summarize content in real-time, moving away from traditional search methods that provide links to multiple sources. This shift poses challenges for content publishers, as the growth of retrieval bots indicates a changing economic landscape where content is increasingly consumed by AI first, with human users following. Publishers may need to rethink their engagement strategies to adapt to this new reality.
In the broader context of technology and cybersecurity, WhatsApp's intervention in a legal case concerning encryption and privacy rights highlights the growing role of platforms in surveillance debates. Additionally, the U.S. Cybersecurity and Infrastructure Security Agency faces leadership challenges amid a talent exodus, raising concerns about its operational effectiveness. As the IT services industry evolves, the integration of AI into various sectors, including hiring and cybersecurity, underscores the importance of execution, interoperability, and trust in automation. The future of technology will depend on how well businesses can navigate these changes and support their clients in making informed decisions.
Four things to know today
00:00 AI's Jagged Reality: Study Reveals Limits to Model Memory as Bots Redefine the Web Economy
05:35 Cybersecurity Crossroads: WhatsApp Joins Apple in Legal Fight as U.S. Agency Leadership Crumbles
08:29 AI Matures Into Infrastructure Layer as IT Vendors Shift Focus to Outcomes and Execution
11:51 Legal Tech, GenAI, and Fast Food Bots All Show One Thing: Hype Doesn't Equal Success
This is the Business of Tech.
Supported by: All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights?
Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on: LinkedIn: https://www.linkedin.com/company/28908079/ YouTube: https://youtube.com/mspradio/ Facebook: https://www.facebook.com/mspradionews/ Instagram: https://www.instagram.com/mspradio/ TikTok: https://www.tiktok.com/@businessoftech Bluesky: https://bsky.app/profile/businessof.tech
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
In this episode of No Priors, Sarah and Elad are joined by Dr. Fei-Fei Li, AI pioneer, co-director of Stanford's Human-Centered AI Institute, and founder of World Labs. Fei-Fei shares why she's building at the intersection of embodiment and intelligence, and what today's AI systems are still missing. From the early days of ImageNet to her vision for the next generation of robotics, she unpacks the human and technical motivations behind World Labs. They also discuss the challenges of 3D world modeling, her approach to building exceptional teams, and the special qualities that have led her students like Andrej Karpathy to make major breakthroughs. Show Notes: 0:00 Why and what Dr. Fei-Fei Li is building 3:00 World models at World Labs 6:44 Missing gaps in the AI future 9:16 Robotics and physical intelligence 16:15 Greatest challenges of 3D 19:08 Fei-Fei's work in PhD in ImageNet 23:05 Special moments in Dr. Li's career 29:33 Building teams 32:05 Human-centered AI
Vibe coding is having a moment. The buzzy new phrase was coined earlier this year by OpenAI co-founder Andrej Karpathy to describe his process of programming by prompting AI. It's been embraced by tech professionals and amateurs alike. Google, Microsoft, and Apple have built or are developing their own AI-assisted coding platforms, while vibe coding startups like Cursor are raking in funding. Marketplace's Meghan McCarty Carino recently spoke with Clarence Huang, vice president of technology at the financial software company Intuit and an early adopter of vibe coding, about how the practice has changed how he approaches building software. More on this: “What is vibe coding, exactly?” - from MIT Technology Review; “New ‘Slopsquatting' Threat Emerges from AI-Generated Code Hallucinations” - from HackRead; “Three-minute explainer on… slopsquatting” - from Raconteur
Our episode this week tackles that quiet question many of us ponder: are others using AI more effectively than we are? We explore some fascinating new research just published by Harvard Business Review that reveals the top 100 AI use cases based on actual user reports. It shows some dramatic changes in how people are using AI compared with just one year ago, and it will give you new insights on how you could be using AI right now too. What's particularly interesting is that the top five use cases have been completely reshuffled, with entirely new entrants debuting straight into the top five spots. From deeply personal uses to professional applications, we were surprised by the findings. The research takes a unique approach, which we explore in this episode. We also: explore the top 5 use case types and share prompts and experiences; share personal examples of how others are using AI to save hours organising their lives; discuss why certain uses have skyrocketed in popularity; and examine a thought-provoking observation from AI thought leader Andrej Karpathy about why AI is unfolding completely differently than any other tech. If you've been fretting that your workplace is falling behind in the AI race, we share exactly why this might be. And if you have been wondering how your use of AI compares to other people's, this episode may answer that question too. If you're looking for inspiration on how to use AI, stay tuned for a wealth of practical ideas. Enjoy this episode. Useful Links: Full list of HBR's Top 100 AI use cases - check our website for the complete list - www.dontstopusnow.co; Harvard Business Review article on Top 100 AI use cases; Andrej Karpathy's blog post on consumers as AI power users. Subscribe to Don't Stop Us Now – AI Edition wherever you get your podcasts to stay in the loop on what you need to know to remain relevant in this fast-changing world. Hosted on Acast. See acast.com/privacy for more information.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
OpenAI cofounder Andrej Karpathy makes an argument that the normal patterns of technology diffusion have been upended with AI, to the benefit of regular people. Source: https://x.com/karpathy/status/1909308143156240538 Get Ad Free AI Daily Brief: https://patreon.com/AIDailyBrief Brought to you by: KPMG – Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions. Vanta - Simplify compliance - https://vanta.com/nlw Plumb - The Automation Platform for AI Experts - https://useplumb.com/nlw The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Kevin Weil is the chief product officer at OpenAI, where he oversees the development of ChatGPT, enterprise products, and the OpenAI API. Prior to OpenAI, Kevin was head of product at Twitter, Instagram, and Planet, and was instrumental in the development of the Libra (later Novi) cryptocurrency project at Facebook.In this episode, you'll learn:1. How OpenAI structures its product teams and maintains agility while developing cutting-edge AI2. The power of model ensembles—using multiple specialized models together like a company of humans with different skills3. Why writing effective evals (AI evaluation tests) is becoming a critical skill for product managers4. The surprisingly enduring value of chat as an interface for AI, despite predictions of its obsolescence5. How “vibe coding” is changing how companies operate6. What OpenAI looks for when hiring product managers (hint: high agency and comfort with ambiguity)7. “Model maximalism” and why today's AI is the worst you'll ever use again8. Practical prompting techniques that improve AI interactions, including example-based prompting—Brought to you by:• Eppo—Run reliable, impactful experiments• Persona—A global leader in digital identity verification• OneSchema—Import CSV data 10x faster—Where to find Kevin Weil:• X: https://x.com/kevinweil• LinkedIn: https://www.linkedin.com/in/kevinweil/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Kevin's background(04:06) OpenAI's new image model(06:52) The role of chief product officer at OpenAI(10:18) His recruitment story and joining OpenAI(17:20) The importance of evals in AI(24:59) Shipping quickly and consistently(28:34) Product reviews and iterative deployment(39:35) Chat as an interface for AI(43:59) Collaboration between researchers and product teams(46:41) Hiring product managers at OpenAI(48:45) Embracing ambiguity in product management(51:41) The role of AI in product teams(53:21) Vibe coding and AI prototyping(55:55) The future of product teams and fine-tuned models(01:04:36) AI in education(01:06:42) Optimism and concerns about AI's future(01:16:37) Reflections on the Libra project(01:20:37) Lightning round and final thoughts—Referenced:• OpenAI: https://openai.com/• The AI-Generated Studio Ghibli Trend, Explained: https://www.forbes.com/sites/danidiplacido/2025/03/27/the-ai-generated-studio-ghibli-trend-explained/• Introducing 4o Image Generation: https://openai.com/index/introducing-4o-image-generation/• Waymo: https://waymo.com/• X: https://x.com• Facebook: https://www.facebook.com/• Instagram: https://www.instagram.com/• Planet: https://www.planet.com/• Sam Altman on X: https://x.com/sama• A conversation with OpenAI's CPO Kevin Weil, Anthropic's CPO Mike Krieger, and Sarah Guo: https://www.youtube.com/watch?v=IxkvVZua28k• OpenAI evals: https://github.com/openai/evals• Deep Research: https://openai.com/index/introducing-deep-research/• Ev Williams on X: https://x.com/ev• OpenAI API: https://platform.openai.com/docs/overview• Dwight Eisenhower quote: https://www.brainyquote.com/quotes/dwight_d_eisenhower_164720• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons• StackBlitz: https://stackblitz.com/• Claude 3.5 Sonnet: https://www.anthropic.com/news/claude-3-5-sonnet• Anthropic: 
https://www.anthropic.com/• Four-minute mile: https://en.wikipedia.org/wiki/Four-minute_mile• Chad: https://chatgpt.com/g/g-3F100ZiIe-chad-open-a-i• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/• Figma: https://www.figma.com/• Julia Villagra on LinkedIn: https://www.linkedin.com/in/juliavillagra/• Andrej Karpathy on X: https://x.com/karpathy• Silicon Valley CEO says ‘vibe coding' lets 10 engineers do the work of 100—here's how to use it: https://fortune.com/2025/03/26/silicon-valley-ceo-says-vibe-coding-lets-10-engineers-do-the-work-of-100-heres-how-to-use-it/• Cursor: https://www.cursor.com/• Windsurf: https://codeium.com/windsurf• GitHub Copilot: https://github.com/features/copilot• Patrick Srail on X: https://x.com/patricksrail• Khan Academy: https://www.khanacademy.org/• CK-12 Education: https://www.ck12.org/• Sora: https://openai.com/sora/• Sam Altman's post on X about creative writing: https://x.com/sama/status/1899535387435086115• Diem (formerly known as Libra): https://en.wikipedia.org/wiki/Diem_(digital_currency)• Novi: https://about.fb.com/news/2020/05/welcome-to-novi/• David Marcus on LinkedIn: https://www.linkedin.com/in/dmarcus/• Peter Zeihan on X: https://x.com/PeterZeihan• The Wheel of Time on Prime Video: https://www.amazon.com/Wheel-Time-Season-1/dp/B09F59CZ7R• Top Gun: Maverick on Prime Video: https://www.amazon.com/Top-Gun-Maverick-Joseph-Kosinski/dp/B0DM2LYL8G• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice | Alex Komoroske (Stripe, Google): https://www.lennysnewsletter.com/p/unconventional-product-advice-alex-komoroske• MySQL: https://www.mysql.com/—Recommended books:• Co-Intelligence: Living and Working with AI: https://www.amazon.com/Co-Intelligence-Living-Working-Ethan-Mollick/dp/059371671X• The Accidental Superpower: Ten Years On: https://www.amazon.com/Accidental-Superpower-Ten-Years/dp/1538767341• Cable Cowboy: https://www.amazon.com/Cable-Cowboy-Malone-Modern-Business/dp/047170637X—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
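One topic listed in the notes above, example-based prompting, is easy to picture in code. Below is a minimal sketch using the OpenAI Node SDK; the model name, the classification task, and the example reviews are invented for illustration and are not taken from the episode.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Example-based (few-shot) prompting: demonstrate the exact
// input -> output mapping you want before supplying the real input.
const messages = [
  { role: "system" as const, content: "Classify each review as positive or negative. Reply with one word." },
  { role: "user" as const, content: "Review: The battery lasts all day." },
  { role: "assistant" as const, content: "positive" },
  { role: "user" as const, content: "Review: It broke after a week." },
  { role: "assistant" as const, content: "negative" },
  { role: "user" as const, content: "Review: Setup was painless and the screen is gorgeous." },
];

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini", // any chat-capable model would do here
  messages,
});

console.log(completion.choices[0].message.content); // expected: "positive"
```

The worked examples pin down the output format, so the model is far less likely to answer with a full sentence when a single label is wanted.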
The term 'vibe coding' — which first appeared in a post on X by Andrej Karpathy in early February 2025 — has set the software development world abuzz: everyone seems to have their own take on what it is, how it's done and whether it's a bold new chapter in the history of programming or an insult to anyone that's ever written a line of code. Clearly, then, we need to talk about vibe coding — and that's precisely what we do on this episode of the Technology Podcast. Featuring Thoughtworkers Birgitta Böckeler (AI for Software Delivery Lead) and Lilly Ryan (Cybersecurity Principal), who join hosts Neal Ford and Prem Chandrasekaran, we dive into the different understandings and applications of the concept, and discuss what happens when a meme collides with reality.
This week we talk about Studio Ghibli, Andrej Karpathy, and OpenAI. We also discuss code abstraction, economic repercussions, and DOGE. Recommended Book: How To Know a Person by David Brooks. Transcript: In late-November of 2022, OpenAI released a demo version of a product they didn't think would have much potential, because it was kind of buggy and not very impressive compared to the other things they were working on at the time. This product was a chatbot interface for a generative AI model they had been refining, called ChatGPT. This was basically just a chatbot that users could interact with, as if they were texting another human being. And the results were good enough—both in the sense that the bot seemed kinda sorta human-like, but also in the sense that the bot could generate convincing-seeming text on all sorts of subjects—that people went absolutely gaga over it, and the company went full-bore on this category of products, dropping an enterprise version in August the following year, a search engine powered by the same general model in October of 2024, and by 2025, upgraded versions of their core models were widely available, alongside paid, enhanced tiers for those who wanted higher-level processing behind the scenes: that upgraded version basically tapping a model with more feedstock, a larger training library and more intensive and refined training, but also, in some cases, a model that thinks longer, that can reach out and use the internet to research stuff it doesn't already know, and increasingly, to produce other media, like images and videos. During that time, this industry has absolutely exploded, and while OpenAI is generally considered to be one of the top dogs in this space, still, they've got enthusiastic and well-funded competition from pretty much everyone in the big tech world, like Google and Amazon and Meta, while also facing upstart competitors like Anthropic and Perplexity, alongside burgeoning Chinese competitors, like Deepseek, and established Chinese tech giants like Tencent and Baidu. It's been somewhat boggling watching this space develop, as while there's a chance some of the valuations of AI-oriented companies are overblown, potentially leading to a correction or the popping of a valuation bubble at some point in the next few years, the underlying tech and the output of that tech really has been iterating rapidly, the state of the art in generative AI in particular producing just staggeringly complex and convincing images, videos, audio, and text, but the lower-tier stuff, which is available to anyone who wants it, for free, is also valuable and useable for all sorts of purposes. Just recently, at the tail-end of March 2025, OpenAI announced new multimodal capabilities for its GPT-4o language model, which basically means this model, which could previously only generate text, can now produce images, as well. And the model has been lauded as a sort of sea change in the industry, allowing users to produce remarkable photorealistic images just by prompting the AI—telling it what you want, basically—with usually accurate, high-quality text, which has been a problem for most image models up till this point. 
It also boasts the capacity to adjust existing images in all sorts of ways.Case-in-point, it's possible to use this feature to take a photo of your family on vacation and have it rendered in the style of a Studio Ghibli cartoon; Studio Ghibli being the Japanese animation studio behind legendary films like My Neighbor Totoro, Spirited Away, and Princess Mononoke, among others.This is partly the result of better capabilities by this model, compared to its precursors, but it's also the result of OpenAI loosening its policies to allow folks to prompt these models in this way; previously they disallowed this sort of power, due to copyright concerns. And the implications here are interesting, as this suggests the company is now comfortable showing that their models have been trained on these films, which has all sorts of potential copyright implications, depending on how pending court cases turn out, but also that they're no long being as precious with potential scandals related to how their models are used.It's possible to apply all sorts of distinctive styles to existing images, then, including South Park and the Simpsons, but Studio Ghibli's style has become a meme since this new capability was deployed, and users have applied it to images ranging from existing memes to their own self-portrait avatars, to things like the planes crashing into the Twin Towers on 9/11, JFK's assassination, and famous mass-shootings and other murders.It's also worth noting that the co-founder of Studio Ghibli, Hayao Miyazaki, has called AI-generated artwork “an insult to life itself.” That so many people are using this kind of AI-generated filter on these images is a jarring sort of celebration, then, as the person behind that style probably wouldn't appreciate it; many people are using it because they love the style and the movies in which it was born so much, though. An odd moral quandary that's emerged as a result of these new AI-provided powers.What I'd like to talk about today is another burgeoning controversy within the AI space that's perhaps even larger in implications, and which is landing on an unprepared culture and economy just as rapidly as these new image capabilities and memes.—In February of 2025, the former AI head at Tesla, founding team member at OpenAI, and founder of an impending new, education-focused project called Eureka Labs named Andrej Karpathy coined the term ‘vibe coding' to refer to a trend he's noticed in himself and other developers, people who write code for a living, to develop new projects using code-assistant AI tools in a manner that essentially abstracts away the code, allowing the developer to rely more on vibes in order to get their project out the door, using plain English rather than code or even code-speak.So while a developer would typically need to invest a fair bit of time writing the underlying code for a new app or website or video game, someone who's vibe coding might instead focus on a higher, more meta-level of the project, worrying less about the coding parts, and instead just telling their AI assistant what they want to do. 
The AI then figures out the nuts and bolts, writes a bunch of code in seconds, and then the vibe coder can tweak the code, or have the AI tweak it for them, as they refine the concept, fix bugs, and get deeper into the nitty-gritty of things, all, again, in plain-spoken English.There are now videos, posted in the usual places, all over YouTube and TikTok and such, where folks—some of whom are coders, some of whom are purely vibe coders, who wouldn't be able to program their way out of a cardboard box—produce entire functioning video games in a matter of minutes.These games typically aren't very good, but they work. And reaching even that level of functionality would previously have taken days or weeks for an experienced, highly trained developer; now it takes mere minutes or moments, and can be achieved by the average, non-trained person, who has a fundamental understanding of how to prompt AI to get what they want from these systems.Ethan Mollick, who writes a fair bit on this subject and who keeps tabs on these sorts of developments in his newsletter, One Useful Thing, documented his attempts to make meaning from a pile of data he had sitting around, and which he hadn't made the time to dig through for meaning. Using plain English he was able to feed all that data to OpenAI's Deep Research model, interact with its findings, and further home in on meaningful directions suggested by the data.He also built a simple game in which he drove a firetruck around a 3D city, trying to put out fires before a competing helicopter could do the same. He spent a total of about $13 in AI token fees to make the game, and he was able to do so despite not having any relevant coding expertise.A guy named Pieter Levels, who's an experienced software engineer, was able to vibe-code a video game, which is a free-to-play, massively multiplayer online flying game, in just a month. Nearly all the code was written by Cursor and Grok 3, the first of which is a code-writing AI system, the latter of which is a ChatGPT-like generalist AI agent, and he's been able to generate something like $100k per month in revenue from this game just 17 days, post-launch.Now an important caveat here is that, first, this game received a lot of publicity, because Levels is a well-known name in this space, and he made this game as part of a ‘Vibe Coding Game Jam,' which is an event focused on exactly this type of AI-augmented programming, in which all of the entrants had to be at least 80% AI generated. But he's also a very skilled programmer and game-maker, so this isn't the sort of outcome the average person could expect from these sorts of tools.That said, it's an interesting case study that suggests a few things about where this category of tools is taking us, even if it's not representative for all programming spaces and would-be programmers.One prediction that's been percolating in this space for years, even before ChatGPT was released, but especially after generative AI tools hit the mainstream, is that many jobs will become redundant, and as a result many people, especially those in positions that are easily and convincingly replicated using such tools, will be fired. Because why would you pay twenty people $100,000 a year to do basic coding work when you can have one person working part-time with AI tools vibe-coding their way to approximately the same outcome?It's a fair question, and it's one that pretty much every industry is asking itself right now. 
And we've seen some early waves of firings based on this premise, most of which haven't gone great for the firing entity, as they've then had to backtrack and starting hiring to fill those positions again—the software they expected to fill the gaps not quite there yet, and their offerings suffering as a consequence of that gambit.Some are still convinced this is the way things are going, though, including people like Elon Musk, who, as part of his Department of Government Efficiency, or DOGE efforts in the US government, is basically stripping things down to the bare-minimum, in part to weaken agencies he doesn't like, but also, ostensibly at least, to reduce bloat and redundancy, the premise being that a lot of this work can be done by fewer people, and in some cases can be automated entirely using AI-based systems.This was the premise of his mass-firings at Twitter, now X, when he took over, and while there have been a lot of hiccups and issues resulting from that decision, the company is managing to operate, even if less optimally than before, with about 20% the staff it had before he took over—something like 1,500 people compared to 7,500.Now, there are different ways of looking at that outcome, and Musk's activities since that acquisition will probably color some of our perceptions of his ambitions and level of success with that job-culling, as well. But the underlying theory that a company can do even 90% as well as it did before with just a fifth of the workforce is a compelling argument to many people, and that includes folks running governments, but also those in charge of major companies with huge rosters of employees that make up the vast majority of their operating expenses.A major concern about all this, though, is that even if this theory works in broader practice, and all these companies and governments can function well enough with a dramatically reduced staff using AI tools to augment their capabilities and output, we may find ourselves in a situation in which the folks using said tools are more and more commodified—they'll be less specialized and have less education and expertise in the relevant areas, so they can be paid less, basically, the tools doing more and the humans mostly being paid to prompt and manage them. And as a result we may find ourselves in a situation where these people don't know enough to recognize when the AI are doing something wrong or weird, and we may even reach a point where the abstraction is so complete that very few humans even know how this code works, which leaves us increasingly reliant on these tools, but also more vulnerable to problems should they fail at a basic level, at which point there may not be any humans left who are capable of figuring out what went wrong, since all the jobs that would incentivize the acquisition of such knowledge and skill will have long since disappeared.As I mentioned in the intro, these tools are being applied to images, videos, music, and everything else, as well. Which means we could see vibe artists, vibe designers, vibe musicians and vibe filmmakers. 
All of which is arguably good in the sense that these mediums become more accessible to more people, allowing more voices to communicate in more ways than ever before. But it's also arguably worrying in the sense that more communication might be filtered through the capabilities of these tools—which, by the way, are predicated on previous artists and writers and filmmakers' work, arguably stealing their styles and ideas and regurgitating them, rather than doing anything truly original—and that could lead to less originality in these spaces, but also a similar situation in which people forget how to make their own films, their own art, their own writing; a capability drain that gets worse with each new generation of people who are incentivized to hand those responsibilities off to AI tools; we'll all become AI prompters, rather than all the things we are, currently. This has been the case with many technologies over the years—how many blacksmiths do we have in 2025, after all? And how many people actually hand-code the 1s and 0s that all our coding languages eventually write, for us, after we work at a higher, more human-optimized level of abstraction? But because our existing economies are predicated on a certain type of labor and certain number of people being employed to do said labor, even if those concerns ultimately don't end up being too big a deal, because the benefits are just that much more impactful than the downsides and other incentives to develop these or similar skills and understandings arise, it's possible we could experience a moment, years or decades long, in which the whole of the employment market is disrupted, perhaps quite rapidly, leaving a lot of people without income and thus a lot fewer people who can afford the products and services that are generated more cheaply using these tools. A situation that's ripe with potential for those in a position to take advantage of it, but also a situation that could be devastating to those reliant on the current state of employment and income—which is the vast, vast majority of human beings on the planet. Show Notes: https://en.wikipedia.org/wiki/X_Corp https://devclass.com/2025/03/26/the-paradox-of-vibe-coding-it-works-best-for-those-who-do-not-need-it/ https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/ https://www.wired.com/story/anthropic-benevolent-artificial-intelligence/ https://arstechnica.com/tech-policy/2025/03/what-could-possibly-go-wrong-doge-to-rapidly-rebuild-social-security-codebase/ https://en.wikipedia.org/wiki/Vibe_coding https://www.newscientist.com/article/2473993-what-is-vibe-coding-should-you-be-doing-it-and-does-it-matter/ https://nmn.gl/blog/dangers-vibe-coding https://x.com/karpathy/status/1886192184808149383 https://simonwillison.net/2025/Mar/19/vibe-coding/ https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gnarly-or-reckless-maybe-some-of-both/ https://www.creativebloq.com/3d/video-game-design/what-is-vibe-coding-and-is-it-really-the-future-of-app-and-game-development https://arstechnica.com/ai/2025/03/openais-new-ai-image-generator-is-potent-and-bound-to-provoke/ https://en.wikipedia.org/wiki/Studio_Ghibli This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In this episode, Phillip Gervasi and Ryan Booth dive into "vibe coding"—a new AI-assisted approach where LLMs generate code from natural language descriptions. Inspired by Andrej Karpathy's vision, vibe coding streamlines development but raises questions about debugging, best practices, and the future of software engineering.
Episode 140: Alex breaks down Pieter Levels' AI-coded flying game, which hit $67,000 in monthly revenue in just 3 weeks. Here's what to expect: The stats & story behind Levels' flying game Key lessons to take from this business Understanding critiques of the game — Show Notes: (0:00) A note from our sponsor (2:26) Welcome back to Founder's Journal (3:09) Pieter Levels' AI-coded flying game (6:17) The power of AI in game development (8:00) The value of trusted distribution (13:24) Vibe marketing explained (15:58) Addressing critiques (20:15) Conclusion— Thanks to our presenting sponsor, Gusto. Head to www.gusto.com/alex — Episode Links: • Flying game - https://fly.pieter.com/ • Levels on X - https://x.com/levelsio • Andrej Karpathy - https://www.youtube.com/@AndrejKarpathy Check Out Alex's Stuff: • storyarb - https://www.storyarb.com/ • CTA - https://www.creatortalentagency.co/ • X - https://x.com/businessbarista • Linkedin - https://www.linkedin.com/in/alex-lieberman/ Learn more about your ad choices. Visit megaphone.fm/adchoices
xAI and Elon Musk have launched Grok-3, their cutting-edge AI model. Is it really a step forward? Is it really the cutting edge? Andrej Karpathy is gonna tell us. The first tri-foldable phone is here. Is there a new huge AI player? And how Apple's move to manufacture in India is going. Sponsors: MackWeldon.com Promocode: BRIAN Links: Elon Musk's xAI releases its latest flagship model, Grok 3 (TechCrunch); Impressions of Grok-3 (@karpathy); Huawei's trifold phone launches outside of China (The Verge); Trump tariffs result in 10% laptop price hike in U.S. says Acer CEO (Tom's Hardware); OpenAI Co-Founder Sutskever's Startup Is Fundraising at $30 Billion-Plus Valuation (Bloomberg); Apple's quiet pivot to India (FT). See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Ejaaz and David reunite to dissect the AI Crypto sector's rebound from a 70% crash, fueled by Elon's rumored $97B OpenAI bid and the relentless rise of open-source devs being heads down. They explore how tokens might be the most accessible path to AI exposure, why ARC's curated launchpad could elevate agent quality, and what Virtuals' move onto Solana means for cross-chain expansion. Meanwhile, X (Twitter) embraces a new wave of AI agents, and AI16z reorganizes to stay competitive. Is this the turning point for AI and crypto—or just another plateau? Tune in to find out, anon. ------
If you're in SF, join us tomorrow for a fun meetup at CodeGen Night!If you're in NYC, join us for AI Engineer Summit! The Agent Engineering track is now sold out, but 25 tickets remain for AI Leadership and 5 tickets for the workshops. You can see the full schedule of speakers and workshops at https://ai.engineer!It's exceedingly hard to introduce someone like Bret Taylor. We could recite his Wikipedia page, or his extensive work history through Silicon Valley's greatest companies, but everyone else already does that.As a podcast by AI engineers for AI engineers, we had the opportunity to do something a little different. We wanted to dig into what Bret sees from his vantage point at the top of our industry for the last 2 decades, and how that explains the rise of the AI Architect at Sierra, the leading conversational AI/CX platform.“Across our customer base, we are seeing a new role emerge - the role of the AI architect. These leaders are responsible for helping define, manage and evolve their company's AI agent over time. They come from a variety of both technical and business backgrounds, and we think that every company will have one or many AI architects managing their AI agent and related experience.”In our conversation, Bret Taylor confirms the Paul Buchheit legend that he rewrote Google Maps in a weekend, armed with only the help of a then-nascent Google Closure Compiler and no other modern tooling. But what we find remarkable is that he was the PM of Maps, not an engineer, though of course he still identifies as one. We find this theme recurring throughout Bret's career and worldview. We think it is plain as day that AI leadership will have to be hands-on and technical, especially when the ground is shifting as quickly as it is today:“There's a lot of power in combining product and engineering into as few people as possible… few great things have been created by committee.”“If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes.”“And I think the reason why is if you look at like software as a service five years ago, maybe you can have a separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of technological breakthroughs required for most business applications. And if you're making expense reporting software or whatever, it's useful… You kind of know how databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem. "When you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it and the capabilities of the technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself.”This is the first time the difference between technical leadership for “normal” software and for “AI” software was articulated this clearly for us, and we'll be thinking a lot about this going forward. 
We left a lot of nuggets in the conversation, so we hope you'll just dive in with us (and thank Bret for joining the pod!)Timestamps* 00:00:02 Introductions and Bret Taylor's background* 00:01:23 Bret's experience at Stanford and the dot-com era* 00:04:04 The story of rewriting Google Maps backend* 00:11:06 Early days of interactive web applications at Google* 00:15:26 Discussion on product management and engineering roles* 00:21:00 AI and the future of software development* 00:26:42 Bret's approach to identifying customer needs and building AI companies* 00:32:09 The evolution of business models in the AI era* 00:41:00 The future of programming languages and software development* 00:49:38 Challenges in precisely communicating human intent to machines* 00:56:44 Discussion on Artificial General Intelligence (AGI) and its impact* 01:08:51 The future of agent-to-agent communication* 01:14:03 Bret's involvement in the OpenAI leadership crisis* 01:22:11 OpenAI's relationship with Microsoft* 01:23:23 OpenAI's mission and priorities* 01:27:40 Bret's guiding principles for career choices* 01:29:12 Brief discussion on pasta-making* 01:30:47 How Bret keeps up with AI developments* 01:32:15 Exciting research directions in AI* 01:35:19 Closing remarks and hiring at Sierra Transcript[00:02:05] Introduction and Guest Welcome[00:02:05] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co host swyx, founder of smol.ai.[00:02:17] swyx: Hey, and today we're super excited to have Bret Taylor join us. Welcome. Thanks for having me. It's a little unreal to have you in the studio.[00:02:25] swyx: I've read about you so much over the years, like even before. Open AI effectively. I mean, I use Google Maps to get here. So like, thank you for everything that you've done. Like, like your story history, like, you know, I think people can find out what your greatest hits have been.[00:02:40] Bret Taylor's Early Career and Education[00:02:40] swyx: How do you usually like to introduce yourself when, you know, you talk about, you summarize your career, like, how do you look at yourself?[00:02:47] Bret: Yeah, it's a great question. You know, we, before we went on the mics here, we're talking about the audience for this podcast being more engineering. And I do think depending on the audience, I'll introduce myself differently because I've had a lot of [00:03:00] corporate and board roles. I probably self identify as an engineer more than anything else though.[00:03:04] Bret: So even when I was. Salesforce, I was coding on the weekends. So I think of myself as an engineer and then all the roles that I do in my career sort of start with that just because I do feel like engineering is sort of a mindset and how I approach most of my life. So I'm an engineer first and that's how I describe myself.[00:03:24] Bret: You majored in computer[00:03:25] swyx: science, like 1998. And, and I was high[00:03:28] Bret: school, actually my, my college degree was Oh, two undergrad. Oh, three masters. Right. That old.[00:03:33] swyx: Yeah. I mean, no, I was going, I was going like 1998 to 2003, but like engineering wasn't as, wasn't a thing back then. Like we didn't have the title of senior engineer, you know, kind of like, it was just.[00:03:44] swyx: You were a programmer, you were a developer, maybe. What was it like in Stanford? Like, what was that feeling like? You know, was it, were you feeling like on the cusp of a great computer revolution? 
Or was it just like a niche, you know, interest at the time?[00:03:57] Stanford and the Dot-Com Bubble[00:03:57] Bret: Well, I was at Stanford, as you said, from 1998 to [00:04:00] 2002.[00:04:02] Bret: 1998 was near the peak of the dot com bubble. So. This is back in the day where most people that they're coding in the computer lab, just because there was these sun microsystems, Unix boxes there that most of us had to do our assignments on. And every single day there was a. com like buying pizza for everybody.[00:04:20] Bret: I didn't have to like, I got. Free food, like my first two years of university and then the dot com bubble burst in the middle of my college career. And so by the end there was like tumbleweed going to the job fair, you know, it was like, cause it was hard to describe unless you were there at the time, the like level of hype and being a computer science major at Stanford was like, A thousand opportunities.[00:04:45] Bret: And then, and then when I left, it was like Microsoft, IBM.[00:04:49] Joining Google and Early Projects[00:04:49] Bret: And then the two startups that I applied to were VMware and Google. And I ended up going to Google in large part because a woman named Marissa Meyer, who had been a teaching [00:05:00] assistant when I was, what was called a section leader, which was like a junior teaching assistant kind of for one of the big interest.[00:05:05] Bret: Yes. Classes. She had gone there. And she was recruiting me and I knew her and it was sort of felt safe, you know, like, I don't know. I thought about it much, but it turned out to be a real blessing. I realized like, you know, you always want to think you'd pick Google if given the option, but no one knew at the time.[00:05:20] Bret: And I wonder if I'd graduated in like 1999 where I've been like, mom, I just got a job at pets. com. It's good. But you know, at the end I just didn't have any options. So I was like, do I want to go like make kernel software at VMware? Do I want to go build search at Google? And I chose Google. 50, 50 ball.[00:05:36] Bret: I'm not really a 50, 50 ball. So I feel very fortunate in retrospect that the economy collapsed because in some ways it forced me into like one of the greatest companies of all time, but I kind of lucked into it, I think.[00:05:47] The Google Maps Rewrite Story[00:05:47] Alessio: So the famous story about Google is that you rewrote the Google maps back in, in one week after the map quest quest maps acquisition, what was the story there?[00:05:57] Alessio: Is it. Actually true. Is it [00:06:00] being glorified? Like how, how did that come to be? And is there any detail that maybe Paul hasn't shared before?[00:06:06] Bret: It's largely true, but I'll give the color commentary. So it was actually the front end, not the back end, but it turns out for Google maps, the front end was sort of the hard part just because Google maps was.[00:06:17] Bret: Largely the first ish kind of really interactive web application, say first ish. I think Gmail certainly was though Gmail, probably a lot of people then who weren't engineers probably didn't appreciate its level of interactivity. It was just fast, but. Google maps, because you could drag the map and it was sort of graphical.[00:06:38] Bret: My, it really in the mainstream, I think, was it a map[00:06:41] swyx: quest back then that was, you had the arrows up and down, it[00:06:44] Bret: was up and down arrows. 
Each map was a single image and you just click left and then wait for a few seconds to the new map to let it was really small too, because generating a big image was kind of expensive on computers that day.[00:06:57] Bret: So Google maps was truly innovative in that [00:07:00] regard. The story on it. There was a small company called where two technologies started by two Danish brothers, Lars and Jens Rasmussen, who are two of my closest friends now. They had made a windows app called expedition, which had beautiful maps. Even in 2000.[00:07:18] Bret: For whenever we acquired or sort of acquired their company, Windows software was not particularly fashionable, but they were really passionate about mapping and we had made a local search product that was kind of middling in terms of popularity, sort of like a yellow page of search product. So we wanted to really go into mapping.[00:07:36] Bret: We'd started working on it. Their small team seemed passionate about it. So we're like, come join us. We can build this together.[00:07:42] Technical Challenges and Innovations[00:07:42] Bret: It turned out to be a great blessing that they had built a windows app because you're less technically constrained when you're doing native code than you are building a web browser, particularly back then when there weren't really interactive web apps and it ended up.[00:07:56] Bret: Changing the level of quality that we [00:08:00] wanted to hit with the app because we were shooting for something that felt like a native windows application. So it was a really good fortune that we sort of, you know, their unusual technical choices turned out to be the greatest blessing. So we spent a lot of time basically saying, how can you make a interactive draggable map in a web browser?[00:08:18] Bret: How do you progressively load, you know, new map tiles, you know, as you're dragging even things like down in the weeds of the browser at the time, most browsers like Internet Explorer, which was dominant at the time would only load two images at a time from the same domain. So we ended up making our map tile servers have like.[00:08:37] Bret: Forty different subdomains so we could load maps and parallels like lots of hacks. I'm happy to go into as much as like[00:08:44] swyx: HTTP connections and stuff.[00:08:46] Bret: They just like, there was just maximum parallelism of two. And so if you had a map, set of map tiles, like eight of them, so So we just, we were down in the weeds of the browser anyway.[00:08:56] Bret: So it was lots of plumbing. I can, I know a lot more about browsers than [00:09:00] most people, but then by the end of it, it was fairly, it was a lot of duct tape on that code. If you've ever done an engineering project where you're not really sure the path from point A to point B, it's almost like. Building a house by building one room at a time.[00:09:14] Bret: The, there's not a lot of architectural cohesion at the end. And then we acquired a company called Keyhole, which became Google earth, which was like that three, it was a native windows app as well, separate app, great app, but with that, we got licenses to all this satellite imagery. And so in August of 2005, we added.[00:09:33] Bret: Satellite imagery to Google Maps, which added even more complexity in the code base. And then we decided we wanted to support Safari. There was no mobile phones yet. So Safari was this like nascent browser on, on the Mac. 
And it turns out there's like a lot of decisions behind the scenes, sort of inspired by this windows app, like heavy use of XML and XSLT and all these like.[00:09:54] Bret: Technologies that were like briefly fashionable in the early two thousands and everyone hates now for good [00:10:00] reason. And it turns out that all of the XML functionality and Internet Explorer wasn't supporting Safari. So people are like re implementing like XML parsers. And it was just like this like pile of s**t.[00:10:11] Bret: And I had to say a s**t on your part. Yeah, of[00:10:12] Alessio: course.[00:10:13] Bret: So. It went from this like beautifully elegant application that everyone was proud of to something that probably had hundreds of K of JavaScript, which sounds like nothing. Now we're talking like people have modems, you know, not all modems, but it was a big deal.[00:10:29] Bret: So it was like slow. It took a while to load and just, it wasn't like a great code base. Like everything was fragile. So I just got. Super frustrated by it. And then one weekend I did rewrite all of it. And at the time the word JSON hadn't been coined yet too, just to give you a sense. So it's all XML.[00:10:47] swyx: Yeah.[00:10:47] Bret: So we used what is now you would call JSON, but I just said like, let's use eval so that we can parse the data fast. And, and again, that's, it would literally as JSON, but at the time there was no name for it. So we [00:11:00] just said, let's. Pass on JavaScript from the server and eval it. And then somebody just refactored the whole thing.[00:11:05] Bret: And, and it wasn't like I was some genius. It was just like, you know, if you knew everything you wished you had known at the beginning and I knew all the functionality, cause I was the primary, one of the primary authors of the JavaScript. And I just like, I just drank a lot of coffee and just stayed up all weekend.[00:11:22] Bret: And then I, I guess I developed a bit of reputation and no one knew about this for a long time. And then Paul who created Gmail and I ended up starting a company with him too, after all of this told this on a podcast and now it's large, but it's largely true. I did rewrite it and it, my proudest thing.[00:11:38] Bret: And I think JavaScript people appreciate this. Like the un G zipped bundle size for all of Google maps. When I rewrote, it was 20 K G zipped. It was like much smaller for the entire application. It went down by like 10 X. So. What happened on Google? Google is a pretty mainstream company. And so like our usage is shot up because it turns out like it's faster.[00:11:57] Bret: Just being faster is worth a lot of [00:12:00] percentage points of growth at a scale of Google. So how[00:12:03] swyx: much modern tooling did you have? Like test suites no compilers.[00:12:07] Bret: Actually, that's not true. We did it one thing. So I actually think Google, I, you can. Download it. There's a, Google has a closure compiler, a closure compiler.[00:12:15] Bret: I don't know if anyone still uses it. It's gone. Yeah. Yeah. It's sort of gone out of favor. Yeah. Well, even until recently it was better than most JavaScript minifiers because it was more like it did a lot more renaming of variables and things. Most people use ES build now just cause it's fast and closure compilers built on Java and super slow and stuff like that.[00:12:37] Bret: But, so we did have that, that was it. 
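For readers who want to picture the two tricks Bret describes here, the sketch below shows (a) spreading map-tile requests across many subdomains to work around the old two-connections-per-host browser limit, and (b) the pre-JSON pattern of eval-ing a JavaScript literal returned by the server. The hostnames, URL scheme, and payload are invented for illustration; this is not the actual Google Maps code.

```typescript
// (a) Domain sharding: old browsers opened only ~2 connections per host,
// so tile URLs were spread across many subdomains to download in parallel.
const SHARDS = 40; // "forty different subdomains" in the anecdote
function tileUrl(x: number, y: number, zoom: number): string {
  const shard = (x + y) % SHARDS; // stable shard per tile coordinate
  return `https://mt${shard}.tiles.example.com/tile?x=${x}&y=${y}&z=${zoom}`;
}

// (b) Pre-JSON data transfer: the server sent a JavaScript object literal
// as text and the client eval'd it, which is roughly what JSON.parse does
// safely today. (eval on untrusted input is unsafe; shown purely as history.)
const payload = '({ "lat": 37.42, "lng": -122.08, "label": "Googleplex" })';
const place = eval(payload) as { lat: number; lng: number; label: string };

console.log(tileUrl(5, 9, 12), place.label);
```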
Okay.[00:12:39] The Evolution of Web Applications[00:12:39] Bret: So and that was treated internally, you know, it was a really interesting time at Google at the time because there's a lot of teams working on fairly advanced JavaScript when no one was. So Google suggest, which Kevin Gibbs was the tech lead for, was the first kind of type ahead, autocomplete, I believe in a web browser, and now it's just pervasive in search boxes that you sort of [00:13:00] see a type ahead there.[00:13:01] Bret: I mean, chat, dbt[00:13:01] swyx: just added it. It's kind of like a round trip.[00:13:03] Bret: Totally. No, it's now pervasive as a UI affordance, but that was like Kevin's 20 percent project. And then Gmail, Paul you know, he tells the story better than anyone, but he's like, you know, basically was scratching his own itch, but what was really neat about it is email, because it's such a productivity tool, just needed to be faster.[00:13:21] Bret: So, you know, he was scratching his own itch of just making more stuff work on the client side. And then we, because of Lars and Yen sort of like setting the bar of this windows app or like we need our maps to be draggable. So we ended up. Not only innovate in terms of having a big sync, what would be called a single page application today, but also all the graphical stuff you know, we were crashing Firefox, like it was going out of style because, you know, when you make a document object model with the idea that it's a document and then you layer on some JavaScript and then we're essentially abusing all of this, it just was running into code paths that were not.[00:13:56] Bret: Well, it's rotten, you know, at this time. And so it was [00:14:00] super fun. And, and, you know, in the building you had, so you had compilers, people helping minify JavaScript just practically, but there is a great engineering team. So they were like, that's why Closure Compiler is so good. It was like a. Person who actually knew about programming languages doing it, not just, you know, writing regular expressions.[00:14:17] Bret: And then the team that is now the Chrome team believe, and I, I don't know this for a fact, but I'm pretty sure Google is the main contributor to Firefox for a long time in terms of code. And a lot of browser people were there. So every time we would crash Firefox, we'd like walk up two floors and say like, what the hell is going on here?[00:14:35] Bret: And they would load their browser, like in a debugger. And we could like figure out exactly what was breaking. And you can't change the code, right? Cause it's the browser. It's like slow, right? I mean, slow to update. So, but we could figure out exactly where the bug was and then work around it in our JavaScript.[00:14:52] Bret: So it was just like new territory. Like so super, super fun time, just like a lot of, a lot of great engineers figuring out [00:15:00] new things. And And now, you know, the word, this term is no longer in fashion, but the word Ajax, which was asynchronous JavaScript and XML cause I'm telling you XML, but see the word XML there, to be fair, the way you made HTTP requests from a client to server was this.[00:15:18] Bret: Object called XML HTTP request because Microsoft and making Outlook web access back in the day made this and it turns out to have nothing to do with XML. It's just a way of making HTTP requests because XML was like the fashionable thing. It was like that was the way you, you know, you did it. 
But the JSON came out of that, you know, and then a lot of the best practices around building JavaScript applications is pre React.[00:15:44] Bret: I think React was probably the big conceptual step forward that we needed. Even my first social network after Google, we used a lot of like HTML injection and. Making real time updates was still very hand coded and it's really neat when you [00:16:00] see conceptual breakthroughs like react because it's, I just love those things where it's like obvious once you see it, but it's so not obvious until you do.[00:16:07] Bret: And actually, well, I'm sure we'll get into AI, but I, I sort of feel like we'll go through that evolution with AI agents as well that I feel like we're missing a lot of the core abstractions that I think in 10 years we'll be like, gosh, how'd you make agents? Before that, you know, but it was kind of that early days of web applications.[00:16:22] swyx: There's a lot of contenders for the reactive jobs of of AI, but no clear winner yet. I would say one thing I was there for, I mean, there's so much we can go into there. You just covered so much.[00:16:32] Product Management and Engineering Synergy[00:16:32] swyx: One thing I just, I just observe is that I think the early Google days had this interesting mix of PM and engineer, which I think you are, you didn't, you didn't wait for PM to tell you these are my, this is my PRD.[00:16:42] swyx: This is my requirements.[00:16:44] mix: Oh,[00:16:44] Bret: okay.[00:16:45] swyx: I wasn't technically a software engineer. I mean,[00:16:48] Bret: by title, obviously. Right, right, right.[00:16:51] swyx: It's like a blend. And I feel like these days, product is its own discipline and its own lore and own industry and engineering is its own thing. And there's this process [00:17:00] that happens and they're kind of separated, but you don't produce as good of a product as if they were the same person.[00:17:06] swyx: And I'm curious, you know, if, if that, if that sort of resonates in, in, in terms of like comparing early Google versus modern startups that you see out there,[00:17:16] Bret: I certainly like wear a lot of hats. So, you know, sort of biased in this, but I really agree that there's a lot of power and combining product design engineering into as few people as possible because, you know few great things have been created by committee, you know, and so.[00:17:33] Bret: If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a. Maniacal focus on outcomes.[00:17:53] Bret: And I think the reason why it's, I think for some areas, if you look at like software as a service five years ago, maybe you can have a [00:18:00] separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of like. Technological breakthroughs required for most, you know, business applications.[00:18:11] Bret: And if you're making expense reporting software or whatever, it's useful. I don't mean to be dismissive of expense reporting software, but you probably just want to understand like, what are the requirements of the finance department? What are the requirements of an individual file expense report? Okay.[00:18:25] Bret: Go implement that. And you kind of know how web applications are implemented. You kind of know how to. 
How databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem when you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it.[00:18:58] Bret: And the capabilities of the [00:19:00] technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself. And that's why I use the word conversation. It's not literal. That's sort of funny to use that word in the age of conversational AI.[00:19:15] Bret: You're constantly sort of saying, like, ideally, you could sprinkle some magic AI pixie dust and solve all the world's problems, but it's not the way it works. And it turns out that actually, I'll just give an interesting example.[00:19:26] AI Agents and Modern Tooling[00:19:26] Bret: I think most people listening probably use co pilots to code like Cursor or Devon or Microsoft Copilot or whatever.[00:19:34] Bret: Most of those tools are, they're remarkable. I'm, I couldn't, you know, imagine development without them now, but they're not autonomous yet. Like I wouldn't let it just write most code without my interactively inspecting it. We just are somewhere between it's an amazing co pilot and it's an autonomous software engineer.[00:19:53] Bret: As a product manager, like your aspirations for what the product is are like kind of meaningful. But [00:20:00] if you're a product person, yeah, of course you'd say it should be autonomous. You should click a button and program should come out the other side. The requirements meaningless. Like what matters is like, what is based on the like very nuanced limitations of the technology.[00:20:14] Bret: What is it capable of? And then how do you maximize the leverage? It gives a software engineering team, given those very nuanced trade offs. Coupled with the fact that those nuanced trade offs are changing more rapidly than any technology in my memory, meaning every few months you'll have new models with new capabilities.[00:20:34] Bret: So how do you construct a product that can absorb those new capabilities as rapidly as possible as well? That requires such a combination of technical depth and understanding the customer that you really need more integration. Of product design and engineering. And so I think it's why with these big technology waves, I think startups have a bit of a leg up relative to incumbents because they [00:21:00] tend to be sort of more self actualized in terms of just like bringing those disciplines closer together.[00:21:06] Bret: And in particular, I think entrepreneurs, the proverbial full stack engineers, you know, have a leg up as well because. I think most breakthroughs happen when you have someone who can understand those extremely nuanced technical trade offs, have a vision for a product. And then in the process of building it, have that, as I said, like metaphorical conversation with the technology, right?[00:21:30] Bret: Gosh, I ran into a technical limit that I didn't expect. It's not just like changing that feature. You might need to refactor the whole product based on that. And I think that's, that it's particularly important right now. 
So if you're building a big ERP system, there's probably a great reason to have separate product and engineering.[00:21:51] Bret: I think in general the disciplines are there for a reason. I think when you're dealing with something as nuanced as the technologies we have today, like large language models, there's a ton of [00:22:00] advantage in having individuals or organizations that integrate the disciplines more formally.[00:22:05] Alessio: That makes a lot of sense.[00:22:06] Alessio: I've run a lot of engineering teams in the past, and I think the product-versus-engineering tension has always been more about effort than about whether or not the feature is buildable. But today you see a lot more of: the models actually cannot do that. And I think the most interesting thing is, on the startup side, people don't yet know where a lot of the AI value is going to accrue.[00:22:26] Alessio: So you have this rush of people building frameworks, building infrastructure, layered things, but we don't really know the shape of the compute. I'm curious how at Sierra you thought about building in-house a lot of the tooling for evals, or just building the agents and all of that,[00:22:41] Alessio: versus how you see some of the startup opportunities that are maybe still out there.[00:22:46] Bret: We build most of our tooling in-house at Sierra, not all of it. It's not necessarily not-invented-here syndrome, though maybe we're slightly guilty of that in some ways, but because we're trying to build a platform [00:23:00] that's enduring, you know, we really want to have control over our own destiny.[00:23:03] Bret: And you had made a comment earlier that we're still trying to figure out what the React of agents is, and the jury is still out. I would argue it hasn't been created yet; I don't think the jury is even out yet, to use that metaphor. We're sort of in the jQuery era of agents, not the React era.[00:23:19] Bret: And that's a throwback for people listening.[00:23:22] swyx: We shouldn't rush it, you know?[00:23:23] Bret: No, yeah, that's my point. And so, because we're trying to create an enduring company at Sierra that outlives us, I'm not sure we want to hitch our cart to a horse where it's not clear we've figured it out. And I'll quickly go back to what we do: at Sierra, we help consumer brands build customer-facing AI agents.[00:23:48] Bret: So everyone from Sonos to ADT Home Security to SiriusXM. If you call them on the phone, an AI will pick up; you can chat with them on the SiriusXM homepage. It's an AI agent called Harmony [00:24:00] that they've built on our platform. We ask: what are the contours of what it means for someone to build an end-to-end, complete customer experience with conversational AI?[00:24:09] Bret: You know, we really want to dive into the deep end of all the trade-offs to do it. Where do you use fine-tuning? Where do you string models together? Where do you use reasoning? Where do you use generation? How do you use reasoning? How do you express the guardrails of an agentic process?[00:24:25] Bret: How do you impose determinism on a fundamentally non-deterministic technology? It's just a really important design space. And I could sit here and tell you we have the best approach.
Every entrepreneur will say that, you know. But I hope that in two years we look back at our platform and laugh at how naive we were, because that's the pace of change broadly.[00:24:45] Bret: If you talk about the startup opportunities, I'm not wholly skeptical of tools companies, but I'm fairly skeptical. There's always an exception to every rule, but I believe that there's certainly a big market for [00:25:00] frontier models, but largely for companies with huge CapEx budgets: OpenAI and Microsoft, Anthropic and Amazon Web Services, Google Cloud, xAI, which is very well capitalized now. But I think the idea that a company can make money sort of pre-training a foundation model is probably not true.[00:25:20] Bret: It's hard; you're competing with unreasonably large CapEx budgets. And just like the cloud infrastructure market, I think it will largely end up there. I also really believe in the applications of AI. And I define that not as building agents or things like that; I define it much more as actually solving a problem for a business.[00:25:40] Bret: So it's what Harvey is doing in the legal profession, or what Cursor is doing for software engineering, or what we're doing for customer experience and customer service. The reason I believe in that is I do think that in the age of AI, what's really interesting about software is it can actually complete a task.[00:25:56] Bret: It can actually do a job, which is very different from what the value proposition of [00:26:00] software was in the ancient history of two years ago. And as a consequence, I think the way you build a solution for a domain is very different than you would have before, which means it's not obvious that the incumbents have a leg up. They certainly have some advantages, but there's just such a different form factor for providing a solution, and it's just really valuable.[00:26:23] Bret: Just think of how much money Cursor is saving software engineering teams, or alternatively, how much revenue it can produce. Tool making is really challenging. If you look at the cloud market just as an analog, there are a lot of interesting tools companies: Confluent monetized Kafka; there's Snowflake, Hortonworks, a bunch of them.[00:26:48] Bret: A lot of them, like Confluent, have that open source or open core mix, or whatever you call it. I'm not an expert in this area. I do think [00:27:00] that developers are fickle. I think that in the tool space I probably default towards open source being the area that will win.[00:27:09] Bret: It's hard to build a company around this, and then you end up with companies built around open source. That can work, don't get me wrong, but the tools are changing so rapidly nowadays that I'm not totally skeptical of tool makers; I just think that open source will broadly win, and I think that the CapEx required for building frontier models is such that it will go to a handful of big companies.[00:27:33] Bret: And then I really believe in agents for specific domains, which I think is sort of the analog to software as a service in this new era. If you just think of the cloud: you can lease a server.
It's just a low-level primitive, or you can buy an app like, you know, Shopify or whatever.[00:27:51] Bret: And most people building a storefront would prefer Shopify over hand-rolling their e-commerce storefront. I think the same thing will be true of AI. So [00:28:00] if an entrepreneur asks me for advice, I'm like, you know, move up the stack as far as you can towards a customer need.[00:28:09] Bret: Broadly. But it doesn't reduce my excitement about the "what is the React of building agents" kind of thing, because it is the right question to ask. I just think it will probably play out in the open source space more than anything else.[00:28:21] swyx: Yeah, and it's not a priority for you. There's a lot in there.[00:28:24] swyx: I'm kind of curious about your idea maze. There are many customer needs. You happen to identify customer experience as yours, but it could equally have been coding assistance or whatever. I'm just kind of curious, top down, how do you look at the world in terms of the potential problem space?[00:28:44] swyx: Because there are many people out there who are very smart and pick the wrong problem.[00:28:47] Bret: Yeah, that's a great question.[00:28:48] Future of Software Development[00:28:48] Bret: By the way, I would love to talk about the future of software too, because despite the fact that I didn't pick coding, I have a lot of thoughts on that. But I can answer your question. You know, I think when a technology is as [00:29:00] cool as large language models,[00:29:02] Bret: you just see a lot of people starting from the technology and searching for a problem to solve. And I think it's why you see a lot of tools companies, because as a software engineer, you start building an app or a demo and you encounter some pain points. You're like,[00:29:17] swyx: a lot of[00:29:17] Bret: people are experiencing the same pain point, what if I make a tool for it? But it's just very incremental. And I always like to use the metaphor: you can sell coffee beans, roasted coffee beans. You can add some value; you took coffee beans and roasted them, and roasted coffee beans are largely priced relative to the cost of the beans.[00:29:39] Bret: Or you can sell a latte, and a latte is rarely priced directly as a percentage of coffee bean prices. In fact, if you buy a latte at the airport, it's a captive audience, so it's a really expensive latte. And there's just a lot that goes into how much a latte costs. I bring it up because there's a supply chain from growing [00:30:00] coffee beans to roasting coffee beans to, you know, making one at home or buying one in the airport, and the margins of the company selling lattes in the airport are a lot higher than the people roasting the coffee beans', and it's because you've actually solved a much more acute human problem in the airport.[00:30:19] Bret: And it's just worth a lot more to that person in that moment. It's kind of the way I think about technology too.
It sounds funny to liken it to coffee beans, but if you're selling tools on top of a large language model, in some ways your market is big, but you're probably going to be price-compressed, just because you're sort of a piece of infrastructure, and then you have open source and all these other things competing with you naturally.[00:30:43] Bret: If you go and solve a really big business problem for somebody, a meaningful business problem that AI facilitates, they will value it according to the value of that business problem. And so I actually feel like people should just stop there. You might say, no, that's [00:31:00] unfair if you're searching for an idea. Look, I love people trying things, even if, I mean, a lot of the greatest ideas have been things no one believed in.[00:31:07] Bret: So if you're passionate about something, go do it. Who am I to say? Yeah, a hundred percent. Or Gmail. Like Paul Buchheit, I mean, some of it's lore at this point, but Gmail was Paul's own email for a long time. And then, amusingly, and Paul can correct me, I'm pretty sure he sent around a link and the first comment was like, this is really neat; it would be great if it was not your email, but my own. I don't know if it's a true story. I'm pretty sure it is, yeah, I've read that before. So scratch your own itch? Fine. It depends on what your goal is. If you want to do a venture-backed company... If it's a passion project, f*****g do it, don't listen to anybody.[00:31:41] Bret: But if you're trying to start an enduring company, solve an important business problem. And I do think that in the world of agents, the software industry has shifted: you're not just helping people be more productive, you're actually accomplishing tasks autonomously.[00:31:58] Bret: And as a consequence, I think the [00:32:00] addressable market has greatly expanded, just because software can actually do things now and actually accomplish tasks. How much is coding autocomplete worth? A fair amount. How much is the eventual, and I'm certain we'll have it, software agent that actually writes the code and delivers it to you? That's worth a lot.[00:32:20] Bret: And so I would just maybe look up from the large language models and start thinking about the economy and think from first principles. I don't want to get too far afield, but just think about which parts of the economy will benefit most from this intelligence and which parts can absorb it most easily.[00:32:38] Bret: And what would an agent in this space look like? Who's the customer of it? Is the technology feasible? I would just start with these business problems more. And I think the best companies tend to have great engineers who happen to have great insight into a market. And it's that last part that I think some people,[00:32:56] Bret: whether or not they have it, it's like people start so much in the technology that they [00:33:00] lose the forest for the trees a little bit.[00:33:02] Alessio: How do you think about the model of still selling some sort of software versus selling more packaged labor?
I feel like when people are selling the packaged labor, it's almost more stateless, you know; it's easier to swap out if you're just putting in an input and getting an output.[00:33:16] Alessio: If you think about coding, if there's no IDE, you're just putting in a prompt and getting back an app. It doesn't really matter who generates the app; you have less of a buy-in. Versus the platform you're building, where I'm sure on the backend customers have to put in their documentation and they have different workflows that they can tie in. What's the line to draw there, versus going full managed-customer-support-team-as-a-service outsourcing, versus[00:33:40] Alessio: "this is the Sierra platform that you can build on"? What was that decision?[00:33:44] Bret: I'll sort of decouple the question in some ways, which is: when you have something that's an agent, who is the person using it and what do they want to do with it? So let's just take your coding agent for a second. I will talk about Sierra as well.[00:33:59] Bret: Who's the [00:34:00] customer of an agent that actually produces software? Is it a software engineering manager? Is it a software engineer? And is it their, you know, intern, so to speak? I don't know. I mean, we'll figure this out over the next few years. What is that? And is it generating code that you then review?[00:34:16] Bret: Is it generating code with a set of unit tests that pass? What is the actual, for lack of a better word, contract? How do you know that it did what you wanted it to do? And then I would say the product and the pricing and packaging model sort of emerge from that. And I don't think the world's figured it out.[00:34:33] Bret: I think it'll be different for every agent. In our customer base, we do what's called outcome-based pricing. So essentially every time the AI agent solves the problem, or saves a customer, or whatever it might be, there's a pre-negotiated rate for that. We do that because we think that's the correct way agents should be packaged.[00:34:53] Bret: I look back at the history of cloud software, and notably the introduction of the browser, which led to [00:35:00] software being delivered in a browser, like Salesforce. Salesforce famously invented sort of software as a service, which is both a technical delivery model through the browser and a business model, which is you subscribe to it rather than pay for a perpetual license.[00:35:13] Bret: Those two things are somewhat orthogonal, but not really. If you think about the idea of software running in a browser, hosted in a data center that you don't own, you sort of needed to change the business model, because you can't really buy a perpetual license or something; otherwise, how do you afford making changes to it?[00:35:31] Bret: It only worked when you were buying a new version every year or whatever. So, to some degree, the business model shift actually changed business as we know it, because now things like Adobe Photoshop you subscribe to rather than purchase.
So it ended up where you had a technical shift and a business model shift that were very logically intertwined, and the business model shift turned out to be as significant as the technical shift.[00:35:59] Bret: And I think with [00:36:00] agents, because they actually accomplish a job, it doesn't make sense to me that you'd pay for the privilege of using the software. That coding agent, if it writes really bad code, fire it, you know. I don't know what the right metaphor is, but you should pay for a job well done, in my opinion. I mean, that's how you pay your software engineers, right?[00:36:20] swyx: Well, not really. We pay to put them on salary and give them options, and they vest over time. That's fair.[00:36:26] Bret: But my point is that you don't pay them for how many characters they write, which is sort of the token-based thing, you know. There's that famous Apple story where they were asking for a report of how many lines of code you wrote.[00:36:40] Bret: And one of the engineers showed up with a negative number because he had just done a big refactoring. It was a big F-you to management who didn't understand how software is written. My sense is the traditional usage-based or seat-based thing is just going to look really antiquated, because it's like asking your software engineer, how many lines of code did you write today? Who cares? There's [00:37:00] absolutely no correlation. So my own view is, I do think it'll be different in every category, but if an agent is doing a job, I think paying for outcomes properly incentivizes both the maker of that agent and the customer; you're paying for the job well done.[00:37:16] Bret: It's not always perfect to measure; it's hard to measure engineering productivity, but you should do something other than how many keys you typed. Talk about perverse incentives for AI, right? I can write really long functions to do the same thing. So broadly speaking, I do think that we're going to see a change in business models of software towards outcomes.[00:37:36] Bret: And I think you'll see a change in delivery models too. In our customer base, we empower our customers to really have their hands on the steering wheel of what the agent does; they want and need that. But the role is different. At a lot of our customers, the customer experience operations folks have renamed themselves the AI architects, which I think is really cool.[00:37:55] Bret: And it's like the early days of the internet, when there was the role of the webmaster. [00:38:00] Webmaster is not a fashionable term anymore, nor is it a job. Will the AI architect title stand the test of time? Maybe, maybe not. But again, because everyone listening right now is a software engineer:[00:38:14] Bret: what is the form factor of a coding agent? And actually, I'll take a breath, because I have a bunch of opinions on this. I wrote a blog post right before Christmas, just on the future of software development. And one of the things that's interesting is, if you look at the way I use Cursor today, as an example, it's inside of [00:38:31] a repackaged Visual Studio Code environment.
I sometimes use the agentic parts of it, but largely I've gotten a good routine of making it autocomplete code in the way I want by tuning it properly, when it actually can write it. I do wonder what the future of development environments will look like.[00:38:55] Bret: And to your point on what a software product is, I think it's going to change a lot in [00:39:00] ways that will surprise us. I use the metaphor in my blog post: have you all driven around in a Waymo around here? Yeah, everyone has. And there are these Jaguars, really nice cars, but it's funny because it still has a steering wheel, even though there's no one sitting there, and the steering wheel is turning and stuff. Clearly in the future,[00:39:16] Bret: once that becomes more ubiquitous, why have the steering wheel, and also why have all the seats facing forward? Maybe just for car sickness, I don't know, but you could totally rearrange the car. So much of the car is oriented around the driver. It stands to reason to me that autonomous agents for software engineering running through Visual Studio Code seems a little bit silly, because having a single source code file open at a time is kind of a goofy form factor when the code isn't being written primarily by you. But it begs the question of what your relationship with that agent is. And I think the same is true in our industry of customer experience, which is:[00:39:55] Bret: who are the people managing this agent? What tools do they need? And they definitely need [00:40:00] tools, but they're probably pretty different than the tools we had before. It's certainly different than training a contact center team. And as software engineers, I would like to see, particularly on the passion project or research side,[00:40:14] Bret: more innovation in programming languages. I think that we're bringing the cost of writing code down to zero, so the fact that we're still writing Python with AI cracks me up, just because it literally was designed to be ergonomic to write, not safe to run or fast to run. I would love to see more innovation in how we verify program correctness.[00:40:37] Bret: I studied formal verification in college a little bit, and it's not very fashionable, because it's really tedious and slow and doesn't work very well. If a lot of code is being written by a machine, one of the primary values we can provide is verifying that it actually does what we intend it to do.[00:40:56] Bret: I think there should be lots of interesting things in the software development life cycle, like how [00:41:00] we think of testing and everything else, because if we have to manually read every line of code that's coming out of machines, it will just rate-limit how much the machines can do. The alternative is totally unsafe.[00:41:13] Bret: So I wouldn't want to put code in production that didn't go through proper code review and inspection. My whole view is, I actually think there's an AI-native... I don't think the coding agents work well enough to do this yet, but once they do, what is an AI-native software development life cycle, and how do you actually[00:41:31] Bret: enable the creators of software to produce the highest-quality, most robust, fastest software and know that it's correct? And I think that's an incredible opportunity.
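As a concrete illustration of the "verify it rather than read every line" idea Bret sketches above, here is a minimal, hypothetical sketch in Python (not Sierra's or anyone else's actual tooling; the function and the properties are invented for illustration): code that an agent might have produced is checked against a specification with property-based tests instead of relying only on human review.

```python
# Minimal sketch: treat `dedupe_keep_order` as code a coding agent produced,
# and check it against a specification using property-based testing (Hypothesis)
# rather than relying only on a human reading every line.
from hypothesis import given, strategies as st


def dedupe_keep_order(items: list[int]) -> list[int]:
    """Stand-in for agent-written code: remove duplicates, keep first occurrences."""
    seen: set[int] = set()
    out: list[int] = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out


@given(st.lists(st.integers()))
def test_dedupe_matches_spec(items: list[int]) -> None:
    result = dedupe_keep_order(items)
    # Spec 1: no duplicates remain.
    assert len(result) == len(set(result))
    # Spec 2: nothing is invented and nothing unique is lost.
    assert set(result) == set(items)
    # Spec 3: the relative order of first occurrences is preserved.
    assert result == [x for i, x in enumerate(items) if x not in items[:i]]
```

Run with pytest; the point is not this particular check but that an explicit specification, not line-by-line reading, is what gates the generated code. It is a very lightweight stand-in for the heavier formal verification Bret mentions.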
I mean, how much C code can we rewrite in Rust and make safe so that there are fewer security vulnerabilities? Can we have more efficient, safer code than ever before?[00:41:53] Bret: And can you have someone who's like that guy in the Matrix, staring at the little green things? Where could you have an operator [00:42:00] of a code-generating machine be superhuman? I think that's a cool vision. And I think too many people are focused on autocomplete right now. I'm guilty as charged, I guess, in some ways,[00:42:10] Bret: but I'd like to see some bolder ideas. And that's why, when you were joking about what's the React of whatever, I think we're clearly in a local maximum, a sort of conceptual local maximum. Obviously it's moving really fast; I think we're moving out of it.[00:42:26] Alessio: Yeah. At the end of '23, I read this blog post about going from syntax to semantics. If you think about Python, it's taking C and making it more semantic, and LLMs are like the ultimate semantic program, right? You can just talk to them and they can generate any type of syntax from your language. But again, the languages they have to use were made for us, not for them.[00:42:46] Alessio: But the problem is, as long as you will ever need a human to intervene, you cannot change the language underneath. You know what I mean? So I'm curious what point of automation we need to get to before we're okay making changes to the underlying languages, [00:43:00] the programming languages, versus just saying, hey, you've got to write Python, because I understand Python and I'm more important at the end of the day than the model.[00:43:08] Alessio: But I think that will change; I just don't know if it's two years or five years. I think it's more nuanced, actually.[00:43:13] Bret: So some of the more interesting programming languages bring semantics into syntax. That's a little reductive, but take Rust as an example: Rust is memory safe,[00:43:25] Bret: statically, and that was a really interesting conceptual step, but it's why it's hard to write Rust. It's why most people write Python instead of Rust. I think Rust programs are safer and faster than Python, probably slower to compile. But broadly speaking, given the option, if you didn't have to care about the labor that went into it,[00:43:45] Bret: you should prefer a program written in Rust over a program written in Python, just because it will run more efficiently. It's almost certainly safer, et cetera, et cetera, depending on how you define safe. But most people don't write Rust because it's kind of a pain in the ass, and [00:44:00] the audience of people who can is smaller, but it's sort of better in most ways.[00:44:05] Bret: And again, let's say you're making a web service and you didn't have to care about how hard it was to write. If you just got the output of the web service, the Rust one would be cheaper to operate. It's certainly cheaper and probably more correct, just because there's so much static analysis implied by the Rust programming language that it will probably have fewer runtime errors and things like that as well.[00:44:25] Bret: I give that as an example because Rust, at least to my understanding, came out of the Mozilla team, because there are lots of security vulnerabilities in the browser and it needs to be really fast.
They said, okay, we want to put more of a burden on authorship time to have fewer issues at runtime.[00:44:43] Bret: And we need the constraint that it has to be done statically, because browsers need to be really fast. My sense is, if you just think about the needs of a programming language today, where the role of a software engineer is [00:45:00] to use an AI to generate functionality and audit that it does in fact work as intended, maybe functionally, maybe from a correctness standpoint, some combination thereof, how would you create a programming system that facilitated that?[00:45:15] Bret: And I bring up Rust because I think it's a good example: given a choice of writing in C or Rust, you should choose Rust today. I think most people would say that, even C aficionados, just because C is largely less safe for very similar trade-offs for the system. And now with AI, it's like, okay, well, that just changes the game on writing these things. And so I just wonder about a combination of programming languages that are more structurally oriented towards the values that we need from an AI-generated program: verifiable correctness and all of that. If it's tedious for a person to produce, maybe that doesn't matter. But one thing: if I asked you, is this Rust program memory safe? You wouldn't have to read it, you'd just have [00:46:00] to compile it. So that's interesting. That's one example of a very modest form of formal verification. I bring that up because I do think you can have AI inspect AI; you can have AI do code reviews. It would disappoint me if the best we could get was AI reviewing Python. Having scaled a few very large[00:46:21] websites that were written in Python, it's just expensive. Trust me, every team who's written a big web service in Python has experimented with PyPy and all these things just to make it slightly more efficient than it naturally is. You don't really have true multithreading anyway.[00:46:36] Clearly you do it just because it's convenient to write. And I don't want to say it's insane; I just do think we're at a local maximum. And I would hope that we create a programming system, a combination of programming languages, formal verification, testing, automated code reviews, where you can use AI to generate software in a high-scale way and trust it, and you're [00:47:00] not limited by your ability to read it necessarily. I don't know exactly what form that would take, but I feel like that would be a pretty cool world to live in.[00:47:08] Alessio: Yeah. We had Chris Lattner on the podcast. He's doing great work with Modular. I mean, I love LLVM. Basically merging Rust and Python,[00:47:15] Alessio: that's kind of the idea. But I'm curious: for them, a big use case was making it compatible with Python, same APIs, so that Python developers could use it. Yeah. And so I wonder at what point, well, yeah.[00:47:26] Bret: At least my understanding is they're targeting the data science, yeah, machine learning crowd, which is all written in Python, so it still feels like a local maximum.[00:47:34] Bret: Yeah.[00:47:34] swyx: Yeah, exactly. I'll force you to make a prediction. You know, Python's roughly 30 years old.
In 30 years from now, is Rust going to be bigger than Python?[00:47:42] Bret: I don't know. I don't even know if this is a prediction; I'm just sort of saying stuff I hope is true. I would like to see an AI-native programming language and programming system, and I say "language" loosely because I'm not sure a language is even the right thing, but I hope in 30 years there's an AI-native way we make [00:48:00] software that is wholly uncorrelated with the current set of programming languages,[00:48:04] Bret: or not uncorrelated, but I think most programming languages today were designed to be efficiently authored by people, and some have different trade-offs.[00:48:15] Evolution of Programming Languages[00:48:15] Bret: You have Haskell and others that were designed around abstractions for parallelism and things like that. You have programming languages like Python, which are designed to be very easily written, sort of the Perl and Python lineage, which is why data scientists use it.[00:48:31] Bret: It has an interactive mode, things like that. And I'm a huge Python fan. Despite all my Python trash talk, I'm a huge Python fan; at least two of my three companies were exclusively written in Python. And then C came out of the birth of Unix, and it wasn't the first, but it was certainly the most prominent first step after assembly language, right?[00:48:54] Bret: Where you had higher-level abstractions, going beyond goto to [00:49:00] abstractions like the for loop and the while loop.[00:49:01] The Future of Software Engineering[00:49:01] Bret: So I just think that if the act of writing code is no longer a meaningful human exercise, maybe it will be, I don't know, it sort of feels like maybe it's one of those parts of history that will just go away. But there's still the role of the software engineer, the person actually building the system, right?[00:49:20] Bret: And what does a programming system for that form factor look like?[00:49:25] React and Front-End Development[00:49:25] Bret: And just like I mentioned, I remember I was at Facebook in the very early days when what is now React was being created. And I remember when it was released open source, I had left by that time, and I was just like, this is so f*****g cool.[00:49:42] Bret: You know, basically modeling your app independent of the data flowing through it just made everything easier. And now, a lot of the front-end software landscape is a little chaotic for me, to be honest with you. It's sort of [00:50:00] abstraction soup right now for me, but some of those core ideas felt really ergonomic.[00:50:04] Bret: I'm just looking forward to the day when someone comes up with a programming system that feels like both a real aha moment and completely foreign to me at the same time, because they created it from first principles, recognizing that authoring code in an editor is maybe not the primary reason a programming system exists anymore.[00:50:26] Bret: And I think that would be a very exciting day for me.[00:50:28] The Role of AI in Programming[00:50:28] swyx: Yeah, I would say the various versions of this discussion have happened before. At the end of the day, you still need to precisely communicate what you want.
As a manager of people, as someone who has done many, many legal contracts, you know how hard that is.[00:50:42] swyx: And now we have to talk to machines doing that, and AIs interpreting what we mean and effectively reading our minds. I don't know how to get across that barrier of translating human intent to instructions. And yes, it can be more declarative, but I don't know if it'll ever cross over from being [00:51:00] a programming language to something more than that.[00:51:02] Bret: I agree with you. And I actually do think, if you look at a legal contract, the imprecision of the English language is like a flaw in the system.[00:51:12] swyx: How many holes there are.[00:51:13] Bret: And I do think that when you're making a mission-critical software system, I don't think it should be English-language prompts.[00:51:19] Bret: I think that is silly, because you want the precision of a programming language. My point was less about that and more about the actual act of authoring it.[00:51:32] Formal Verification in Software[00:51:32] Bret: I think of how some embedded systems do use formal verification. I know it's very common in security protocols now, because the importance of correctness is so great.[00:51:41] Bret: My intellectual exercise is: why not do that for all software? Probably it's silly to do literally what we do for these low-level security protocols, but the only reason we don't is because it's hard and tedious, and hard and tedious are no longer factors. So, [00:52:00] just think of the silliest app on your phone right now. The idea that that app should be formally verified for its correctness feels laughable right now, because, God, why would you spend the time on it?[00:52:10] Bret: But if it's zero cost? Yeah, I guess so. I mean, it never crashes; that's probably good. You know, why not? I just want to set our bars really high. Software has been amazing. There's a Marc Andreessen blog post, software is eating the world. And our whole life is mediated digitally,[00:52:26] Bret: and that's just increasing with AI. And now we'll have our personal agents talking to the agents on the CRM platform, and it's agents all the way down; our core infrastructure is running on these digital systems. And we've had a shortage of software developers for my entire life.[00:52:45] Bret: And as a consequence, remember healthcare.gov, that fiasco, or security vulnerabilities leading to state actors getting access to critical infrastructure. We now have created this amazing system that can [00:53:00] fix this, you know, and I'm excited about the productivity gains in the economy, but I just think as software engineers, we should be bolder.[00:53:08] Bret: We should have aspirations to fix these systems so that, in general, as you said, as precise as we want to be in the specification of the system, we can make it work correctly. Now, I'm being a little bit hand-wavy, and I think we need some new systems, but I think that's where we should set the bar, especially when so much of our life depends on this critical digital infrastructure.[00:53:28] Bret: So I'm just super optimistic about it. But actually, let's go to w
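To make concrete the point above about wanting "the precision of a programming language" rather than English-language prompts for mission-critical behavior, here is a minimal, hypothetical sketch in Python (the schema, field names, and limits are invented for illustration and are not anything Sierra has described): an agent's output is only acted on if it parses into an explicit, machine-checkable contract.

```python
# Minimal sketch: a precise, machine-checkable contract for an agent's output,
# enforced before any action is taken. All names and limits are hypothetical.
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class RefundDecision:
    order_id: str
    approve: bool
    amount_cents: int

    def validate(self) -> None:
        # Deterministic guardrails layered on top of a non-deterministic model.
        if not self.order_id:
            raise ValueError("order_id must be non-empty")
        if not (0 <= self.amount_cents <= 50_000):
            raise ValueError("refund amount outside the allowed range")


def parse_agent_output(raw: str) -> RefundDecision:
    """Reject anything that does not match the contract; never act on free text."""
    data = json.loads(raw)
    decision = RefundDecision(
        order_id=str(data["order_id"]),
        approve=bool(data["approve"]),
        amount_cents=int(data["amount_cents"]),
    )
    decision.validate()
    return decision


if __name__ == "__main__":
    ok = parse_agent_output('{"order_id": "A-123", "approve": true, "amount_cents": 1999}')
    print(ok)
```

In this framing, the specification lives in code that a type checker and tests can exercise, while the English prompt becomes an implementation detail rather than the contract.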
Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of a blog on AI called Interconnects. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep459-sc See below for timestamps, and to give feedback, submit questions, contact Lex, etc. CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Dylan's X: https://x.com/dylan522p SemiAnalysis: https://semianalysis.com/ Nathan's X: https://x.com/natolambert Nathan's Blog: https://www.interconnects.ai/ Nathan's Podcast: https://www.interconnects.ai/podcast Nathan's Website: https://www.natolambert.com/ Nathan's YouTube: https://youtube.com/@natolambert Nathan's Book: https://rlhfbook.com/ SPONSORS: To support this podcast, check out our sponsors & get discounts: Invideo AI: AI video generator. Go to https://invideo.io/i/lexpod GitHub: Developer platform and AI code editor. Go to https://gh.io/copilot Shopify: Sell stuff online. Go to https://shopify.com/lex NetSuite: Business management software. Go to http://netsuite.com/lex AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex OUTLINE: (00:00) - Introduction (13:28) - DeepSeek-R1 and DeepSeek-V3 (35:02) - Low cost of training (1:01:19) - DeepSeek compute cluster (1:08:52) - Export controls on GPUs to China (1:19:10) - AGI timeline (1:28:35) - China's manufacturing capacity (1:36:30) - Cold war with China (1:41:00) - TSMC and Taiwan (2:04:38) - Best GPUs for AI (2:19:30) - Why DeepSeek is so cheap (2:32:49) - Espionage (2:41:52) - Censorship (2:54:46) - Andrej Karpathy and magic of RL (3:05:17) - OpenAI o3-mini vs DeepSeek r1 (3:24:25) - NVIDIA (3:28:53) - GPU smuggling (3:35:30) - DeepSeek training on OpenAI data (3:45:59) - AI megaclusters (4:21:21) - Who wins the race to AGI? (4:31:34) - AI agents (4:40:16) - Programming and AI (4:47:43) - Open source (4:56:55) - Stargate (5:04:24) - Future of AI PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips
Ep. 284 What if AI could double your email conversion rates overnight? Kipp and Kieran dive into the revolutionary AI strategies that are transforming the way we approach email marketing, featuring insights from Dan Wolchonok of Reforge. Learn more on how leveraging proprietary data can create hyper-personalized emails for increased engagement, the importance of seamlessly integrating AI solutions into existing workflows, and the innovative use of AI-generated content to make compelling email campaigns. Mentions Dan Wolchonok https://www.linkedin.com/in/danielwolchonok/ Reforge https://www.reforge.com/ Grammarly https://www.grammarly.com/ Andrej Karpathy https://karpathy.ai/ Button Down AI Newsletter https://buttondown.com/ainews Get our guide to build your own Custom GPT: https://clickhubspot.com/customgpt We're creating our next round of content and want to ensure it tackles the challenges you're facing at work or in your business. To understand your biggest challenges we've put together a survey and we'd love to hear from you! https://bit.ly/matg-research Resource [Free] Steal our favorite AI Prompts featured on the show! Grab them here: https://clickhubspot.com/aip We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg Twitter: https://twitter.com/matgpod TikTok: https://www.tiktok.com/@matgpod Join our community https://landing.connect.com/matg Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934 If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. Host Links: Kipp Bodnar, https://twitter.com/kippbodnar Kieran Flanagan, https://twitter.com/searchbrat ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.
Pieter Levels (aka levelsio on X) is a self-taught developer and entrepreneur who has designed, programmed, launched over 40 startups, many of which are highly successful. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep440-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/pieter-levels-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Pieter's X: https://x.com/levelsio Pieter's Techno Optimist Shop: https://levelsio.com/ Indie Maker Handbook: https://readmake.com/ Nomad List: https://nomadlist.com Remote OK: https://remoteok.com Hoodmaps: https://hoodmaps.com SPONSORS: To support this podcast, check out our sponsors & get discounts: Shopify: Sell stuff online. Go to https://shopify.com/lex Motific: Generative ai deployment. Go to https://motific.ai AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex MasterClass: Online classes from world-class experts. Go to https://masterclass.com/lexpod BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex Eight Sleep: Temp-controlled smart mattress. Go to https://eightsleep.com/lex OUTLINE: (00:00) - Introduction (11:38) - Startup philosophy (19:09) - Low points (22:37) - 12 startups in 12 months (29:29) - Traveling and depression (42:08) - Indie hacking (46:11) - Photo AI (1:22:28) - How to learn AI (1:31:04) - Robots (1:39:21) - Hoodmaps (2:03:26) - Learning new programming languages (2:12:58) - Monetize your website (2:19:34) - Fighting SPAM (2:23:07) - Automation (2:34:33) - When to sell startup (2:37:26) - Coding solo (2:43:28) - Ship fast (2:52:13) - Best IDE for programming (3:01:43) - Andrej Karpathy (3:11:09) - Productivity (3:24:56) - Minimalism (3:33:41) - Emails (3:40:54) - Coffee (3:48:40) - E/acc (3:50:56) - Advice for young people PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips