The Neuron covers the latest AI developments, trends and research, hosted by Pete Huang. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available on all podcasting platforms and YouTube.

Let's build with v0 in real time. We're going LIVE with Tom Occhino, Chief Product Officer at Vercel, to explore vibe coding and take a hands-on look at v0, Vercel's AI-powered development platform for building apps faster. We'll show v0 live and walk through how it turns a simple prompt into a real, shippable interface. Tom will also explain what “vibe coding” actually looks like in practice, including how teams are using it today and where it fits into modern development workflows.
What we'll cover:
⚡ You'll see a live build using v0, from the first prompt to a working app.

A team of former Google DeepMind researchers just raised $2B to build America's answer to DeepSeek. In this episode, we sit down with Ioannis Antonoglou (Yannis), co-founder and CTO of Reflection AI, who helped create AlphaGo—the AI that beat the world champion in the game of Go back in 2016. Yannis breaks down what Reflection is building, why they're releasing frontier-level AI models as open-weight, and how mixture-of-experts architecture lets massive models run efficiently. We dig into reinforcement learning, the US vs. China open source gap, sovereign AI, coding agents, and why open science might be the fastest path to the most powerful AI on the planet.
Reflection AI: https://www.reflection.ai
Reflection AI raises $2B at $8B valuation (TechCrunch): https://techcrunch.com/2025/10/09/reflection-raises-2b-to-be-americas-open-frontier-ai-lab-challenging-deepseek/
Previous Neuron coverage of DeepSeek: https://www.theneuron.ai/newsletter/deepseek-returns and https://www.theneuron.ai/newsletter/10-wild-deepseek-demos
Subscribe to The Neuron newsletter: https://theneuron.ai

Most people are still using ChatGPT the way they used Google in 2005: type a question, get an answer, close the tab. In 2026, that's like owning a professional kitchen and only using the microwave. In this episode, Grant and Corey walk through The Neuron's 5-Level AI Proficiency Stack — a framework for going from “I use ChatGPT sometimes” to “AI saves me 10 hours a week.” No coding required. No hype. Just the actual progression that separates casual users from people getting real, compounding value out of AI every single day.
The 5 Levels:

Brian Gerkey is the CTO of Intrinsic, the robotics software company that started inside Alphabet and now sits inside Google, working directly with DeepMind and Gemini. Brian co-created ROS (Robot Operating System), the open-source platform used by over 1 million developers that powers everything from factory robots to NASA's Astrobee on the International Space Station. In this episode, Grant talks with Brian about "physical AI" — what happens when AI leaves the screen and starts controlling robots in the real world. They cover why 80% of US manufacturing facilities still have zero automation, how Intrinsic's platform acts as the "Android of robotics," the breakthroughs in AI-powered perception that let robots see with sub-millimeter accuracy using cheap cameras, the challenges of simulating physical contact (friction is a nightmare), and why the best robot application ideas often come from people who know nothing about robots.
Subscribe to The Neuron newsletter: https://theneuron.ai
Intrinsic: https://www.intrinsic.ai/
ROS (Robot Operating System): https://www.ros.org/
AI for Industry Challenge: https://www.intrinsic.ai/events/ai-for-industry-challenge
Intrinsic joins Google (Feb 2026): https://www.intrinsic.ai/blog/posts/intrinsic-joins-google-to-accelerate-physical-ai

Most businesses don't buy their AI services directly from OpenAI or Google—they buy them through a massive, invisible distribution network called "the channel." Victoria Durgin and Katie Bavoso of Channel Insider join Corey and Grant to explain how this hidden industry works, why AI is shaking it up unlike anything before, and what it means for businesses trying to adopt AI in 2026.
Subscribe to The Neuron newsletter: https://theneuron.ai
Channel Insider: https://channelinsider.com

In this episode of The Neuron Podcast, Corey Noles and Grant Harvey sit down with Dan Shipper, CEO of Every, to talk about agent-native engineering—the framework his team uses to build and ship AI-powered products at a pace most companies can't match. Dan walks us through what happened when his AI document editor Proof went viral (and then went down), why he believes the way we build software is fundamentally changing, and how Every's small team manages to ship and maintain an entire suite of AI tools: Spiral (automatic style guides from your writing), Sparkle (AI writing cleanup with custom folders), Cora (AI research assistant, now on iOS), Monologue (AI-powered journaling with notes), and Proof (the agent-first document editor that broke the internet for a day), as well as their newest product, to be revealed on Friday: Plus One (a hosted AI agent for Slack). Whether you're a founder, developer, or just someone trying to understand what "agentic" actually means in practice—this conversation is the real-world playbook.
Subscribe to The Neuron newsletter: https://theneuron.ai
Products mentioned:
• Every: https://every.to
• Spiral: https://spiral.computer
• Sparkle: https://sparkle.computer
• Cora: https://cora.computer
• Monologue: https://www.monologue.to/
• Proof: https://proofeditor.ai
• Plus One (the new one!): https://every.to/plus-one

Nick Heiner leads RL environment development at Surge AI, the bootstrapped company that hit $1.2B in revenue training models for OpenAI, Anthropic, Meta, and Google. In this episode, we break down reinforcement learning environments—the secret training grounds where AI agents learn to actually do work. Nick shares why even the best models fail 40% of real workplace tasks, what happened when 200 Wall Street experts graded GPT-5 and Claude, and his prediction that a $1B company with one human employee could exist by 2030.
Resources:
• Surge AI Research – Hierarchy of Agentic Capabilities: https://arxiv.org/abs/2601.09032
• Surge AI Blog: https://surgehq.ai/blog
• Nick's Sonnet 4.5 Review: https://surgehq.ai/blog/sonnet-4-5-product-take
• Nick's Substack: https://nickheiner.substack.com/
• SurgeHQ's enterprisebench: https://surgehq.ai/blog/enterprisebench-corecraft
• Nick's hilarious Gemini 3.1 review: https://nickheiner.substack.com/p/gemini-31-pro-not-leading-edge-also
• Hemingway-bench AI Writing Leaderboard: https://surgehq.ai/blog/hemingway-bench-ai-writing-leaderboard
• LMArena is a cancer on AI: https://surgehq.ai/blog/lmarena-is-a-plague-on-ai
Subscribe to The Neuron newsletter: https://theneuron.ai

Proton—the company behind the world's largest encrypted email service with 100M+ users—just launched Lumo, a privacy-first AI assistant. We sit down with Eamonn Maguire, who leads Proton's ML team and built Lumo from the ground up. Eamonn has a PhD from Oxford and a postdoc at CERN, and he breaks down how Lumo's encryption actually works, why Big Tech's business model prevents them from building private AI, the real privacy threats hiding inside viral AI trends like Ghibli-fication, and whether AI agents are safe to connect to your bank account. Listeners will learn how encrypted AI handles your data differently, what open-source models power Lumo, and why "set-and-forget" agents are still more hype than reality.

Carta CMO Nicole Baer joins Corey and Grant to break down the real state of startups in 2026. With half of all venture funding now flowing to AI-native companies and seed deals at a six-year low, the startup playbook has fundamentally changed. Nicole shares Carta's data on solo founders, the new billion-dollar timeline, why the Bay Area's grip is tighter than ever, and how AI is reshaping everything from marketing to fund administration.
Carta State of Startups 2025 Report: https://carta.com/blog/state-of-startups-2025/
Carta Data & Insights (free): https://carta.com/data/
Subscribe to The Neuron newsletter: https://theneuron.ai

Recorded live at NVIDIA GTC 2026 in San Jose, Corey sits down with returning guest Kari Briski—VP of Generative AI Software for Enterprise at NVIDIA—to unpack their biggest open-source model yet: Nemotron 3 Super. Kari breaks down why a 120B-parameter model runs as fast as a 12B one, how multi-agent systems are going from science fiction to production, and why Jensen Huang is calling this "a new operating system." We also dig into NVIDIA's work on Open Claw security, the 35x explosion in open-model token generation, and where omni-modal AI is heading next.
Subscribe to The Neuron newsletter: https://theneuron.ai
Relevant links:
NVIDIA Build (try Nemotron): https://build.nvidia.com
Nemotron on Hugging Face: https://huggingface.co/nvidia
Open Router: https://openrouter.ai
Kari's previous Neuron episode (Oct 2025): https://youtu.be/p0INn_w7TYo

Scientific discovery has always been slow. Until now. In this episode, we sit down with Dr. Qichao Hu, CEO of SES AI, to reveal how they are using AI agents to turn an 8-year research cycle into a 2-week sprint. By combining autonomous "wet labs" with advanced AI models, they are solving one of the hardest physics problems in tech: the battery bottleneck. We dive deep into how this "Molecular Universe" project isn't just about EV batteries—it's about unlocking power for data centers, robotics, and AR glasses. If you want to see a concrete example of AI agents working in the physical world to solve material science constraints, do not miss this conversation.

In this episode, we sit down with Yaron Inger, co-founder of Lightricks and LTX, to explore the future of open-source AI video. LTX-2 is currently the #1 ranked open-source audio & video model on Hugging Face — with over 4.5 million downloads in just two months. But what makes it different? It runs locally. It can be fine-tuned on your own IP. It integrates into real video workflows. And it might change how filmmaking, education, and creative work evolve in the AI era.
We talk about:
• Why open models are catching up to Big Tech
• How smaller models are getting better through distillation
• Running AI video on consumer GPUs
• Infinite, autoregressive video generation
• AI teachers that change environments in real time
• Whether AI will replace filmmakers — or empower them
If you care about the future of creativity, open AI, or the economics of filmmaking… this one is worth your time.
Check out LTX: https://ltx.io
LTX-2 on Hugging Face: https://huggingface.co/Lightricks/LTX-2.3
LTX Desktop Repo: https://github.com/Lightricks/LTX-Desk
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.

You've probably used Canva—but you probably haven't seen what it can do with AI. In this episode of The Neuron, we sit down with Danny Wu, Head of AI Products at Canva, to explore how the platform went from a simple design tool to a full-blown "Creative Operating System" powered by AI—serving 230+ million users every month. Danny walks us through how Canva's MCP server lets you create fully editable designs from inside ChatGPT, Claude, and Microsoft Copilot, why their new Canva Design Model is fundamentally different from typical AI image generators (hint: layers), and why, 24 billion AI tool uses later, the most surprising use cases are ones they never anticipated. We also get Danny's take on whether AI will homogenize all design, his advice for freelancers who don't want to get replaced, and a live demo of Canva's AI design generation in action.
You'll learn:
• How MCP powers Canva inside ChatGPT, Claude, and Copilot
• What the Canva Design Model understands that GPT-4 doesn't
• Why editable layers (not flat images) are the real AI design breakthrough
• Danny's advice for freelancers to become irreplaceable in an AI world
• How Canva uses AI internally on tens of millions of lines of code
• Why AI assistants are becoming "the new SEO" for user acquisition
Try Canva AI at https://canva.com/ai
Special thanks to the sponsor of this video, Cohesity: https://www.cohesity.com/ResilienceEverywhere/?utm_source=brand-ta-podcast&utm_medium=direct-publisher&utm_campaign=fy26-q2-01-amer-us-digital-awarewbpg-brd-genbr&utm_content=podcast
For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai.

Ryan Carson taught over 1,000,000 people how to code at Treehouse and spent 25% of his entire life doing it. Now he says everything about that process needs to change. In this livestream, Ryan joins Corey Noles and Grant Harvey to rethink programming education from scratch for a world where AI agents can write production code, pass competitive coding challenges, and ship features while you sleep.
We'll cover:

AI data centers are going to double their power consumption by 2030—so where's all that energy coming from? One answer is fusion, the same process that powers the sun. In this episode of The Neuron, we're joined by Brandon Sorbom, Chief Science Officer and Co-founder of Commonwealth Fusion Systems, to explore how his company is racing to build the world's first commercial fusion power plant—and how AI is helping them get there faster. Brandon explains why fusion has been "30 years away" for decades, what changed with high-temperature superconducting magnets, and why fusion is fundamentally safer than fission (hint: fusion is "default off"). We dive into CFS's collaborations with Google DeepMind and NVIDIA, what it takes to wrangle 10,000 unique parts, and when we might actually see fusion on the grid.
You'll learn:
• What fusion actually is (and why it's not nuclear fission)
• Why high-temperature superconducting magnets changed everything
• How AI is accelerating plasma control and simulation
• The safety profile that makes fusion regulated like an MRI, not a reactor
• When CFS expects to hit Q > 1 (net energy) and beyond
To learn more about Commonwealth Fusion Systems, visit https://cfs.energy.
For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai

Google just dropped Gemini 3 Flash—a model that outperforms Gemini 2.5 Pro (their last top model) while running 3x faster at less than 1/4 the cost. It's frontier-level reasoning at Flash-level speed, and it's rolling out globally right now. We're sitting down with Logan Kilpatrick from Google DeepMind to explore what this actually means for developers, knowledge workers, and anyone trying to figure out how AI fits into their workflow.
What we'll cover:

Diffusion models changed how we generate images and video—now they're coming for text. In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products.
We talk through:
• The simplest mental model for diffusion: generate a full draft, then refine it by “fixing mistakes”
• Why today's autoregressive LLM inference is often memory-bound—and why diffusion can shift it toward a more GPU-friendly compute profile
• Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech—anywhere humans can't wait)
• What changes (and what doesn't) for long context and architecture choices
• The real-world way to evaluate models in production: offline evals + the gold-standard A/B test
Stefano also shares what's next on Mercury's roadmap—especially around stronger planning and reasoning for agentic use cases.
Try Mercury + learn more: inceptionlabs.ai
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.

Customer service is one of the industries most impacted by AI — but what if AI alone isn't the answer? In this episode of The Neuron Podcast, Grant Harvey and Corey Noles sit down with Matt Price, Founder & CEO of Crescendo, to explore how AI and humans working together can outperform automation alone. After spending 13+ years at Zendesk, Matt is now building an AI-native customer experience platform that automates up to 90% of tickets with 99.8% accuracy — without sacrificing empathy, trust, or outcomes.
We cover:
• Why LLMs are the biggest shift in customer service since the telephone
• Why bolting AI onto old CX workflows fails
• How Crescendo's multimodal AI can chat, talk, see images, and control devices in one conversation
• Real-world examples (like smart sprinkler troubleshooting via voice + vision + APIs)
• Why Crescendo combines AI agents with forward-deployed human experts
• How outcome-based pricing aligns incentives around real customer satisfaction
• How AI is reshaping (not eliminating) customer service jobs
• Why “deflection” is the wrong mindset for CX — and what replaces it
• What customer support roles look like in an AI-native future
This is a deep dive into the next generation of customer experience, where AI handles scale and speed — and humans deliver judgment, empathy, and innovation. Subscribe for weekly conversations with the builders shaping the future of AI and work.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

Taylor Mullen, Principal Engineer at Google and creator of Gemini CLI, reveals how his team ships 100-150 features and bug fixes every week—using Gemini CLI to build itself. In this first in-depth interview about Gemini CLI's origin story, we explore why command-line AI agents are having a "terminal renaissance," how Taylor manages swarms of parallel AI agents, and the techniques (like the viral "Ralph Wiggum" method) that separate 10x engineers from 100x engineers. Whether you're a developer or AI-curious, you'll learn practical strategies for using AI coding tools more effectively.

In this week's live-stream replay, we go live for a 2-hour, hands-on deep dive into GPT-5.1 Codex Max with Alexander Embiricos, product lead for OpenAI Codex. You'll walk out feeling like an agentic-coding wizard, even if you're starting from zero. GPT-5.1 Codex Max is OpenAI's latest frontier agentic coding model. It's built on an upgraded reasoning backbone and trained to handle real-world software engineering tasks end to end: PRs, refactors, frontend builds, and deep debugging. It can work independently for hours, compacting its own history so it can refactor entire projects and run multi-hour agent loops without losing context. In this live session, we'll set it up together, build real agents, and push Codex Max to its limits.

Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn't have to work that way? In this episode of The Neuron, we're joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn't rely on language, tokens, or next-word prediction. With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, allowing models to reason about many possible solutions at once rather than guessing sequentially. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems—and where large language models begin to show their limits.
You'll learn:
• What energy-based models are (in plain English)
• Why token-free architectures change how AI reasons
• How EBMs reduce hallucinations through constraints and verification
• Why EBMs and LLMs may work best together, not in competition
• What this approach reveals about the future of AI systems
To learn more about Eve's work, visit https://logicalintelligence.com.
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.

AI is moving fast — and 2026 is shaping up to be a turning point. In this livestream, Corey and Grant from The Neuron break down our biggest AI predictions for 2026, including:

In this special episode, we go hands-on with three cutting-edge AI tools from Google Labs. First, Jaclyn Konzelman (Director of Product Management) demos Mixboard, an AI-powered concepting board that transforms ideas into visual presentations using Nano Banana Pro. Then, Thomas Iljic (Senior Director of Product Management) shows us Flow, Google's AI filmmaking tool that lets you create, edit, and animate video clips with unprecedented control. Finally, Megan Li (Senior Product Manager) walks us through Opal, a no-code AI app builder that lets anyone create custom AI workflows and mini-apps using natural language.
Subscribe to The Neuron newsletter: https://theneuron.ai
Links:
Mixboard: https://mixboard.google.com
Flow: https://flow.google
Opal: https://opal.google
Google Labs: https://labs.google

Autonomous coding agents are moving from demos to real production workflows. In this episode, Factory AI co-founder and CTO Eno Reyes explains what "Droids" really are—fully autonomous agents that can take tickets, modify real codebases, run tests, and work inside existing dev workflows.We dig into Factory's context compression research (which outperformed both OpenAI and Anthropic), what makes a codebase "agent-ready," and why Stanford research found that the ONLY predictor of AI success was codebase quality—not adoption rates or token usage.Whether you're a developer curious about autonomous coding tools or just want to understand where AI engineering is headed, this episode is packed with practical insights.

AI reasoning models don't just give answers — they plan, deliberate, and sometimes try to cheat. In this episode of The Neuron, we're joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever. Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a “monitorability tax” — trading raw performance for safety and transparency.
We also cover:
• Why smaller models thinking longer can be safer than bigger models
• How AI systems learn to hide misbehavior
• Why suppressing “bad thoughts” can backfire
• The limits of chain-of-thought monitoring
• Bowen's personal view on open-source AI and safety risks
If you care about how AI actually works — and what could go wrong — this conversation is essential.
Resources:
Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
OpenAI's alignment blog: https://alignment.openai.com/

Everyone is rushing to build AI agents — but most companies are setting themselves up for failure. In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows vs. AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical. You'll see real examples of building agents in Make, how Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today's frontier AI race. In this episode of The Neuron, IBM Research's David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment. We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way. If you're building AI systems for production, agents, or enterprise workflows, this conversation is required listening.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

Imagine an AI that doesn't just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world's first post-Transformer frontier model: BDH — the Dragon Hatchling architecture. Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway's architecture introduces true temporal reasoning and continual learning.
We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can “get bored,” adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation
From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves. If you want a window into what comes after LLMs, this interview is essential.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

Carina Hong dropped out of Stanford's PhD program to build "mathematical superintelligence" — and just raised $64M to do it. In this episode, we explore what that actually means: an AI that doesn't just solve math problems but discovers new theorems, proves them formally, and gets smarter with each iteration. Carina explains how her team solved a 130-year-old problem about Lyapunov functions, disproved a 30-year-old graph theory conjecture, and why math is the secret "bedrock" for everything from chip design to quant trading to coding agents. We also discuss the fascinating connections between neuroscience, AI, and mathematics.
Learn more about Axiom: https://axiommath.ai/
Subscribe to The Neuron newsletter: https://theneuron.ai

Nick Talken started a 3D printing materials company in a trailer lab in his co-founder's backyard, sold it to a 145-year-old German chemical giant, then spun out an AI platform that's now transforming R&D for Fortune 100 companies. Albert Invent's foundational AI model—trained on 15 million molecular structures—is helping scientists at companies like Kenvue (maker of Tylenol, Neutrogena, and Listerine) compress projects from 3 months to 2 days. We dig into how enterprises train bespoke AI models on proprietary data, why you can't just use ChatGPT for chemistry, and what becomes possible when AI can "think like a chemist."
Subscribe to The Neuron newsletter: https://theneuron.ai
Albert Invent website: https://www.albertinvent.com
Kenvue partnership announcement: https://www.businesswire.com/news/home/20251014240355/en/

Sam Liang worked on the team that built the "blue dot" for Google Maps, and now he's transforming how we think about meetings with Otter.ai. Fresh off crossing $100M in ARR with a lean team of fewer than 200, Sam joins us to discuss how Otter evolved from passive transcription to active AI agents that participate in your meetings. Learn practical strategies for building reliable voice AI, implementing enterprise knowledge bases, and deploying AI agents that actually deliver ROI.
Resources mentioned:
• Otter.ai $100M ARR announcement: https://otter.ai/blog/otter-ai-breaks-100m-arr-barrier
• HIPAA compliance: https://otter.ai/blog/otter-ai-achieves-hipaa-compliance
Subscribe to The Neuron newsletter: https://theneuron.ai

In this episode, we sit down with Pavan Davuluri, Corporate Vice President of Microsoft's Windows + Devices business, to explore how Windows is evolving into an AI-native platform. Pavan leads the team responsible for strategy, design, and delivery of Windows products across the full stack — from silicon and devices to platform, OS, apps, experiences, security, and cloud. With 23 years at Microsoft, he's driven the creation of the Surface line and now oversees how hardware and software fuse together with AI at the center. We explore how Copilot is being deeply integrated into Windows, the engineering shifts required to make Windows a more proactive and intelligent platform, and how Microsoft balances powerful automation with user control. From Surface design standards influencing the broader ecosystem to supporting OEM partners in the AI PC era, Pavan reveals the principles guiding Windows' transformation and what the computing experience will look like in the next five years.
Subscribe to The Neuron newsletter: https://theneuron.ai
Microsoft Surface: https://www.microsoft.com/surface
Windows AI features: https://www.microsoft.com/windows/ai-features

While everyone obsesses over which AI model is smartest, a quiet revolution is happening in the infrastructure layer underneath. Modular just raised $250M at a $1.6B valuation to solve a problem most people don't know exists: AI is locked into expensive, vendor-specific hardware ecosystems. Tim Davis, Co-Founder & President of Modular, joins us to explain why his company is building the "hypervisor for AI"—making it possible to write code once and run it on any GPU, from NVIDIA to AMD to Apple Silicon. We dive into why this matters for businesses, what the Android analogy really means, how companies are seeing 70-80% cost reductions, and whether we're even on the right path to superintelligence.
Subscribe to The Neuron newsletter: https://theneuron.ai
Try Modular: https://modular.com
Getting Started Guide: https://modular.com/get-started

In this episode, we sit down with Scott Guthrie, EVP of Microsoft's Cloud + AI Group, to explore the architecture behind Azure's AI Superfactory. Scott oversees Microsoft's hyperscale cloud computing solutions including Azure, generative AI platforms, and next-generation infrastructure. We dive into Microsoft's strategic approach to AI datacenter buildout, the innovative Fairwater architecture with its 120,000+ fiber miles of AI WAN backbone, and how Microsoft is balancing performance, sustainability, and cost at planet-scale. From dense GPU clusters drawing 140kW per rack to closed-loop liquid cooling systems, Scott reveals the engineering trade-offs behind infrastructure that powers frontier AI models with trillions of parameters. Whether you're an enterprise leader planning AI adoption or a developer curious about cloud architecture, you'll leave understanding how Microsoft is executing on next-gen infrastructure that transforms global challenges into opportunities.
Subscribe to The Neuron newsletter: https://theneuron.ai

Retool CEO David Hsu reveals that 48% of non-engineers are now shipping software. We explore how AI is democratizing software development, why engineers might stop coding internal apps within 18-24 months, and what this means for the future of work. David shares insights from Retool's survey of 10,000+ companies, Retool's new AppGen program, and how "tomorrow's developers" are using AI to build real production applications on enterprise data.
Subscribe to The Neuron newsletter: https://theneuron.ai
Learn more about Retool: https://retool.com

Computers can see and hear, but they've never been able to smell—until now. In this episode, we sit down with Alex Wiltschko, Founder & CEO of Osmo, to explore how his company is using AI to digitize scent. Alex walks us through how they "teleported" the smell of a fresh plum across their lab, created the world's first AI-designed fragrance molecules, and built Osmo Studio—a platform that lets anyone design custom fragrances in one week instead of two years. We discuss the read/map/write framework for digitizing smell, why scent is tied directly to memory and emotion, and how this technology could eventually detect diseases like cancer and Parkinson's earlier than any current diagnostic. Plus: what does the Museum of Pop Culture smell like, and can AI really create a fragrance from a Bon Iver song?
Links:
Osmo: https://www.osmo.ai
Scent Teleportation Update: https://www.osmo.ai/blog/update-scent-teleportation-we-did-it
Osmo Studio: https://osmostudios.ai/
Check out the sponsor of this video, Flora: https://dub.florafauna.ai/neuron
Subscribe to The Neuron newsletter: https://theneuron.ai
Hosted by: Corey Noles and Grant Harvey
Guest: Alex Wiltschko (Founder & CEO, Osmo)
Published by: Manique Santos
Edited by: Kush Felisilda

Behind every AI response, there's an invisible army of humans who trained it. In this episode, we talk with Caspar Eliot from Invisible Technologies, the company that's trained 80% of the world's top AI models. We explore how models actually learn, why data quality matters more than quantity, what enterprises get wrong about AI deployment, and whether AI will really automate everyone's jobs. Caspar shares insights from working with frontier labs, reveals the surprising skills that make great AI trainers (hint: League of Legends helps), and explains why the future needs more humans, not fewer.
Subscribe to The Neuron newsletter: https://theneuron.ai
Learn more about Invisible Technologies: https://invisibletech.ai

Ever wondered who's actually teaching ChatGPT and Claude how to think? Meet Caspar Eliot from Invisible Technologies, the company behind 80% of the world's top AI model training. In this eye-opening conversation, we uncover the massive human workforce behind "artificial" intelligence, why your League of Legends skills might land you an AI job, and the shocking mistakes enterprises make when deploying AI.
We discuss:
• How AI models really learn (hint: it's not just scraping the internet)
• Why data quality beats data quantity every time
• The Charlotte Hornets' revolutionary AI scouting system
• Whether robots will actually take your job (spoiler: probably not)
• The $14.8 billion Scale AI valuation and what it means
• Why Marc Andreessen thinks VCs won't be automated
Plus: Caspar reveals the #1 mistake companies make with AI deployment and why "AI-ifying" your current process is doomed to fail.
Subscribe to The Neuron newsletter: https://theneuron.ai
Connect with Caspar on LinkedIn: https://uk.linkedin.com/in/caspar-eliot-46b9a55a
Learn more about Invisible Technologies: https://invisibletech.ai
Please check out the sponsor of this video, Warp.dev: https://warp.dev

From Adobe Max 2025 in Los Angeles, Corey and Grant sit down with Ely Greenfield, Adobe's Chief Technology Officer, to explore the philosophy behind Adobe's practical AI strategy. Discover why the crowd went wild over AI renaming layers, how Adobe thinks about "additive not subtractive" AI, and where creative tools are heading next. Ely shares Adobe's vision for making AI a creative partner that enhances rather than replaces human artistry, and explains why the best AI features are often the most boring ones.
Topics covered include: the Photoshop AI Assistant, Harmonize for instant compositing, auto-masking in Premiere Pro, the Express conversational workflow, and Adobe's unique approach to balancing automation with creative control.
This episode was made possible by our sponsor, Clutch. Check out their new report on how SMBs see AI crawlers: https://clutch.co/resources/how-smbs-see-ai-crawlers?source=theneuron&utm_medium=referral&utm_campaign=newsletter_10-14-2025
Read our Adobe Max coverage:
• Adobe goes all-in on AI at Max 2025: https://www.theneuron.ai/explainer-articles/adobe-goes-all-in-on-ai-max-2025-unleashes-creative-ai-arsenal-across-every-tool
• Day 2 Keynote and Sneaks recap: https://www.theneuron.ai/explainer-articles/adobe-max-day-2-the-storyteller-is-still-king-but-ai-is-their-new-superpower
• NVIDIA's Beyond-GPUs Strategy
Related resources:
• Check out Adobe Firefly: https://firefly.adobe.com/
• Project Graph demo: https://www.youtube.com/live/wQza2t9Qs64?t=10409s
Subscribe to The Neuron newsletter for daily AI news: https://theneuron.ai

AI is changing what we need from our computers—but does that mean you need an "AI PC"? Corey and Grant sit down with Logan Lawler, who leads Dell Pro Max AI solutions at Dell Technologies, to decode what matters (and what doesn't) when buying or upgrading your next computer. From CPUs and GPUs to memory, NPUs, and traps to avoid, this episode is your practical roadmap for staying future-ready through the next five years of AI-powered work.
Dell Pro Max Workstations: https://www.dell.com/en-us/plcp/lp/dell-pro-max-pcs
LM Studio LIVE tutorial: https://www.youtube.com/watch?v=Ai3sBeBdA1Y
Kiwix Wikipedia Download: https://en.wikipedia.org/wiki/Kiwix
OneTrainer: https://github.com/Nerogar/OneTrainer
Jawset Postshot: https://www.jawset.com/
Subscribe to The Neuron newsletter: https://theneuron.ai
Check out the Reshaping Workflows Podcast: https://reshaping-workflows.simplecast.com/

Learn how to use NVIDIA's Nemotron open-source AI models with VP Kari Briski. We cover what Nemotron is, minimum hardware specs, the difference between Nano/Super/Ultra tiers, when to choose local vs cloud AI, and practical deployment patterns for businesses. Perfect for anyone wanting to run powerful AI locally with full control and privacy.
Resources mentioned:
NVIDIA Nemotron Models: https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/
Start prototyping for free: https://build.nvidia.com/explore/discover
Subscribe to The Neuron newsletter: https://theneuron.ai
Watch more AI interviews: https://www.youtube.com/@TheNeuronAI

AI search is fundamentally changing how people find information online, but it's also creating a Wild West of spam, manipulation, and brand impersonation. SEO expert Mark Williams-Cook joins us to discuss why he calls AI a "leaky bucket," how expired domains are gaming LLMs, and what the death of the link graph means for the future of search. We'll explore practical strategies for making your site visible to AI, the risks brands face from AI phishing, and whether SEO is truly dead or just evolving. Perfect for anyone who owns a website or runs a business.
Subscribe to The Neuron newsletter: https://theneuron.ai
Guest: Mark Williams-Cook, Director at Candour and Founder of AlsoAsked
Find Mark on LinkedIn: https://www.linkedin.com/in/markseo
Search with Candour podcast: https://withcandour.co.uk/podcast

Everyone's talking about the AI datacenter boom right now. Billion-dollar deals here, hundred-billion-dollar deals there. But why do datacenters matter? It turns out AI inference (actually calling the AI and running it) is the hidden bottleneck slowing down every AI application you use (and new stuff yet to be released). In this episode, Kwasi Ankomah from SambaNova Systems explains why running AI models efficiently matters more than you think, how their revolutionary chip architecture delivers 700+ tokens per second, and why AI agents are about to make this problem 10x worse.

In this special hands-on episode, Corey Noles and Grant Harvey dive into OpenAI's Sora 2 - the AI video platform that's part TikTok, part meme generator, and 100% chaos. Watch as they navigate the new social media-style interface, create ridiculous videos featuring Sam Altman at a Berlin techno rave filled with clowns, and discover why Sam has become the "Tom from MySpace" of AI-generated content.The hosts explore Sora 2's key features including the viral "cameo" system that lets you loan your likeness to other creators, the remix functionality, and the surprisingly robust prompt editing capabilities. They demonstrate the platform's strengths (incredibly fast generation, social features, creative possibilities) and weaknesses (no timeline editor for scrubbing through footage, occasional voice mismatches, server delays during peak times).Key takeaways include practical prompting tips for better results, how to set up and optimize your cameo preferences, and why being descriptive in your prompts makes all the difference. Grant and Corey also discuss the broader implications: Is this OpenAI's answer to TikTok? How does this fit into the AI landscape where every major player now has a social platform? And most importantly - why is everyone making Sam Altman breakdance?Whether you're AI-curious or a seasoned prompt engineer, you'll learn how to navigate Sora 2's interface, avoid common pitfalls, and maybe even create your own viral AI video. 
Plus, find out why Corey's "realistic physique was not okay on Sora" and he had to optimize his cameo settings with ChatGPT's help.
➤ CHAPTERS
0:00 - Introduction: What is Sora 2
1:03 - Sam Altman is the Tom from MySpace of AI
1:57 - Mobile App Tour & Social Features
3:42 - Remix Feature: Editing Sam's Bedtime
4:12 - The Secret to Better Prompting
6:40 - Profile Features & Your Drafts
8:44 - Understanding Cameos
10:40 - How to Set Up Your Cameo
13:00 - Optimizing Cameo Preferences with ChatGPT
15:05 - Live Demo of Creating A Video
18:25 - Using the Edit Feature
20:09 - First Video Results
23:32 - Fixing a Bad Video
26:49 - Finding & Following People
30:33 - Exploring Trending Videos
32:50 - Why OpenAI Built a Social Platform
35:34 - Training Data Implications
38:00 - Voice Input and Pro Prompting Tips
40:02 - The First AI-Native Social Media
45:43 - Final Thoughts
Resources:
- Sora 2 launch: https://openai.com/index/sora-2/
- Download the app: https://apps.apple.com/us/app/sora-by-openai/id6744034028
- Sora app on the web: https://sora.chatgpt.com/explore
P.S.: First comment gets an invite code. Grant has 4 atm :)

In this episode, we're joined by Ahmed El-Kishky, research lead at OpenAI, to discuss their historic victory at the International Collegiate Programming Contest (ICPC), where their AI system solved all 12 problems, beating every human team in the world finals. We dive into how they combined GPT-5 with experimental reasoning models, the dramatic last-minute solve, and what this means for the future of programming and AI-assisted science. Ahmed shares behind-the-scenes stories from Azerbaijan, explains how AI learns to test its own code, and discusses OpenAI's path from this win to automating scientific discovery over months and years.
Subscribe to The Neuron: https://theneuron.ai
Wispr Flow: https://wisprflow.ai/neuron
OpenAI: https://openai.com

Microsoft AI CEO Mustafa Suleyman (co-founder of DeepMind) joins The Neuron to discuss his provocative essay on "Seemingly Conscious AI" and why machines that mimic consciousness pose unprecedented risks - even when they're not actually alive. We explore how 700 million people are already using AI as life coaches, Microsoft's massive $208B revenue strategy for AI, and exclusive features like Copilot Vision that can see everything you see in real time.
Key topics:
• Why AI consciousness is an illusion - and why that's dangerous
• Microsoft's 2 gigawatt datacenter expansion (2.5x Seattle's power usage)
• MAI-1 Preview breaking into the top 10 models globally
• The future of AI browsers and autonomous agents
• Why granting AI rights could threaten humanity
Subscribe to The Neuron newsletter (580,000+ readers): https://theneuron.ai
Resources mentioned:
• Mustafa's essay "Seemingly Conscious AI Is Coming": https://mustafa-suleyman.ai/seemingly...
• Try Copilot Vision: https://copilot.microsoft.com
• Microsoft Edge AI features: https://www.microsoft.com/en-us/edge
• MAI-1 Preview models: https://microsoft.ai/news/two-new-in-...
Special thanks to today's sponsor, Wispr Flow: https://wisprflow.ai/neuron

Steve Brown's house burned down in a wildfire—and accidentally saved his life. When doctors missed his aggressive blood cancer for over a year, Steve built a swarm of AI agents that diagnosed it in minutes and helped design his treatment. Now he's turning that breakthrough into CureWise, a precision oncology platform helping cancer patients become better advocates. We explore agentic medicine, AI safety in healthcare, and how swarms of specialized AI agents are changing cancer care from diagnosis to treatment selection.

Illia Polosukhin, co-author of Attention Is All You Need and co-founder of NEAR Protocol, believes today's centralized AI ecosystem is broken. In this episode, he explains why User-Owned AI is the path forward — making systems private, verifiable, and aligned with users rather than corporations. We explore confidential computing, interoperable AI agents, and what a more sustainable AI future might really look like.Subscribe to The Neuron newsletter: https://www.theneurondaily.com

Thomson Reuters just launched Deep Research—an AI system that doesn't just search legal databases, but plans and strategizes like an experienced attorney. In this episode, we explore how one of the world's largest legal research companies is using AI agents to transform how lawyers work, the challenges of building AI for high-stakes legal decisions, and what this means for the future of knowledge work. CTO Joel Hron shares insights from testing with 1,200+ customers, tackling hallucination risks in legal settings, and building professional-grade AI systems.
Resources mentioned:
Thomson Reuters Deep Research: https://www.prnewswire.com/news-releases/thomson-reuters-launches-cocounsel-legal-transforming-legal-work-with-agentic-ai-and-deep-research-302521761.html
Westlaw & KeyCite: https://legal.thomsonreuters.com/en/products/westlaw/keycite
Claude Code for development: https://www.anthropic.com/claude-code
LinkedIn: Joel Hron
Thomson Reuters Medium blog: https://medium.com/tr-labs-ml-engineering-blog
Subscribe to The Neuron newsletter: https://theneuron.ai

Today we go deeper on Google's AI stack with Logan Kilpatrick: what AI Studio is great at, how it fits with Firebase, Colab, the Gemini CLI, and Jules, and where "thinking" models make sense. We cover real-world workflows—from game prototyping and screen-share assistance to legal/privacy basics and on-device micro-apps. Logan shares his insights on vibe coding, the future of AI development, and Google's open-source strategy with Gemma models.
Resources mentioned:
Google AI Studio: https://aistudio.google.com/
Gemini CLI: https://github.com/google-gemini/gemini-cli
Kaggle Game Arena: https://www.kaggle.com/competitions
Google Firebase: https://firebase.google.com/
Gemma models: https://ai.google.dev/gemma
Subscribe to The Neuron newsletter: https://theneuron.ai