POPULARITY
Gemini 3 is a few days old and the massive leap in performance and model reasoning has big implications for builders: as models begin to self-heal, builders are literally tearing out the functionality they built just months ago... ripping out the defensive coding and reshipping their agent harnesses entirely. Ravin Kumar (Google DeepMind) joins Hugo to break down exactly why the rapid evolution of models like Gemini 3 is changing how we build software. They detail the shift from simple tool calling to building reliable "Agent Harnesses", explore the architectural tradeoffs between deterministic workflows and high-agency systems, the nuance of preventing context rot in massive windows, and why proper evaluation infrastructure is the only way to manage the chaos of autonomous loops. They talk through: - The implications of models that can "self-heal" and fix their own code - The two cultures of agents: LLM workflows with a few tools versus when you should unleash high-agency, autonomous systems.
- Inside NotebookLM: moving from prototypes to viral production features like Audio Overviews - Why Needle in a Haystack benchmarks often fail to predict real-world performance - How to build agent harnesses that turn model capabilities into product velocity - The shift from measuring latency to managing time-to-compute for reasoning tasks LINKS From Context Engineering to AI Agent Harnesses: The New Software Discipline, a podcast Hugo did with Lance Martin, LangChain (https://high-signal.delphina.ai/episode/context-engineering-to-ai-agent-harnesses-the-new-software-discipline) Context Rot: How Increasing Input Tokens Impacts LLM Performance (https://research.trychroma.com/context-rot) Effective context engineering for AI agents by Anthropic (https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk) Watch the podcast video on YouTube (https://youtu.be/CloimQsQuJM) Join the final cohort of our Building AI Applications course starting Jan 12, 2026 (https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=vgrav)
Join hosts Alex Sarlin and Ben Kornell as they unpack the breakthroughs and backlash following the Google DeepMind AI for Learning Forum in London—and what it means for the future of edtech.✨ Episode Highlights:[00:03:30] Google DeepMind's AI for Learning Forum sets a new global tone for learning innovation[00:06:58] Google's “Learn Your Way” tool personalizes entire textbooks with AI[00:08:12] AI video tools like Google Flow redefine classroom content creation[00:13:40] Why this could be the moment for teachers to become AI media creators[00:18:36] Risks of AI-generated video: deepfakes, disinformation, and youth impact[00:22:19] Duolingo stock crashes over 40% amid investor fears of big tech competition[00:23:52] Screen time backlash accelerates: parents turn to screen-free edtech[00:26:14] Why physical math books and comic-style curricula are surging in demand[00:27:35] A wave of screen-free edtech: from LeapFrog alumni to audio-first toolsPlus, special guests:[00:28:51] Michelle Culver, Founder of The Rithm Project, and Erin Mote, CEO of InnovateEDU, on the psychological risks of AI companions, building trust in AI tools, and designing for pro-social relationships[00:51:48] Ben Caulfield, CEO of Eedi, shares groundbreaking findings from their Google DeepMind study: AI tutors now match—and sometimes outperform—humans in math instruction, and how Eedi powers the future of scalable, safe AI tutoring.
In this episode, Nina Olding, Staff Product Manager at Weights & Biases and formerly at Google DeepMind, working on trust and compliance for AI, joins Randy to explore the UX challenges of AI‑driven features. As AI becomes increasingly woven into digital products, the traditional UX cues and trust signals that users rely on are changing. Nina introduces her framework of the three “A's” for AI UX: Awareness, Agency, and Assurance, and explains how product teams can build this into their AI‑enabled products without launching a massive transformation programme.Key Takeaways— As AI features proliferate, the UX challenge is less about the technology and more about how users perceive, understand and trust the interactions.— Trust is based on three foundational dimensions for AI‑enabled products: Awareness, Agency, Assurance.— Awareness: Make it clear when AI is involved (and when it isn't). Invisible AI = risk of misunderstanding. Magical AI without context = disorientation.— Agency: Give users control, or at least the option to opt‑out, define boundaries, choose defaults vs advanced settings.— Assurance: Because AI can be non‑deterministic, you must design for confidence—indicators of reliability, transparency about limitations, ability to question or override outputs.Chapters00:00 – Intro: Why AI products are failing on trust00:47 – Nina Olding's journey from Google DeepMind to Weights & Biases03:20 – The UX of AI: It's not just a chat window04:08 – Introducing the Three A's framework: Awareness, Agency, Assurance08:30 – Designing for Awareness: Visibility and user signals14:40 – Agency: Giving users control and escape hatches21:30 – Assurance: Transparency, confidence indicators, and humility28:05 – Three key questions to assess AI UX30:50 – The product case for trust: Compliance, loyalty, and retention33:00 – Final thoughts: Building the trust muscleFeatured Links: Follow Nina on LinkedIn | Weights & Biases | Check out Nina's 'The hidden UX of AI' slides from Industry 
Conference Cleveland 2025We're taking Community Questions for The Product Experience podcast.Got a burning product question for Lily, Randy, or an upcoming guest? Submit it here. Our HostsLily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath. Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A...
Proteins are crucial for life. They're made of amino acids that “fold” into millions of different shapes. And depending on their structure, they do radically different things in our cells. For a long time, predicting those shapes for research was considered a grand biological challenge.But in 2020, Google's AI lab DeepMind released AlphaFold, a tool that was able to accurately predict many of the structures necessary for understanding biological mechanisms in a matter of minutes. In 2024, the AlphaFold team was awarded a Nobel Prize in chemistry for the advance.Five years after its release, Host Ira Flatow checks in on the state of that tech and how it's being used in health research with John Jumper, one of the lead scientists responsible for developing AlphaFold.Guest: John Jumper, scientist at Google DeepMind and co-recipient of the 2024 Nobel Prize in chemistry.Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Gemini 3 is officially here. ✨ ✨ ✨For about 8 months, Gemini 2.5 Pro has mostly maintained its standing as the top LLM in the world, yet Google just unleashed its successor in Gemini 3.0. So, what's new in Gemini 3? And whether you're a developer or casual user, what does Google's new model unlock? Join us as we chat with Google's Logan Kilpatrick for all the answers. Gemini 3: What's new and what it unlocks for your business -- An Everyday AI Chat with Google DeepMind's Logan Kilpatrick and Jordan WilsonNewsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Gemini 3 Release Overview & FeaturesState-of-the-Art AI Benchmarks ExceededGemini 3 in Google Ecosystem ProductsGemini 3 Vibe Coding Capabilities DemoNon-Developer Use Cases for Gemini 3Multimodal Understanding and VisualizationsAgentic AI Tools: Gemini Agent & AntigravityBusiness Growth with Gemini 3 AI IntegrationTimestamps:00:00 Gemini 3: State-of-the-Art AI05:59 "Gemini 3: Build Ambitiously"08:16 "AI Studio: Bringing Ideas Alive"12:44 Gemini App Agents & Antigravity14:57 "Enhancing AI as a Thought Partner"17:01 AI Studio: Build Apps FasterKeywords:Gemini 3, Gemini 3 Pro, Google AI, AI Studio, Vibe Coding, multimodal model, agentic coding, tool calling, Antigravity, generative interfaces, Gemini app, APIs, AI capabilities, interactive experience, visual dashboard, bespoke visualization, state-of-the-art model, developer platform, agentic developer tools, benchmark results, code editor, IDE integration, product experiences, infrastructure teams, triage inbox, personal assistant, proactive agents, 2.5 Pro, model capability, product feedback, code generation, gallery applets, build 
mode, ambition in AI, software engineering, feature enhancement, thought partner, AI-powered building, on demand experience, interactive visualizations, coding advancements, user engagement, real-time rollout. Head to AI.studio/build to create your first app.
Tonight's Guest WeatherBrain is Corey Bunn. He's a full-time Operational Meteorologist with Coastal Weather Research Center. He prepares daily forecasts, is responsible for issuing severe weather warnings, maintains the company website, and assists in hurricane forecasting operations. He joined the CWRC in 2012 after completing his Bachelor's Degree in Meteorology at the University of South Alabama. Corey, welcome to WeatherBrains! Our next Guest WeatherBrain (in order of appearance) is Jeff Medlin, the founder and CEO of Medlin Meteorological Consulting LLC. He's had a distinguished career, having previously spent over 36 years working with the National Weather Service. His tenure included 8 years as Meteorologist-In-Charge and 20 years as Science and Operations Officer at NWS Mobile (AL). Today, he's the severe and winter weather outlook meteorologist for Coastal Weather Research Center. He's also an Adjunct Professor at the University of South Alabama. Jeff, welcome to WeatherBrains! Tonight's Guest Panelist is someone whose passion for weather started early—at just five years old—after experiencing a weak tornado that sparked a lifelong fascination with the atmosphere. That early intrigue never faded, and today he channels that enthusiasm into his work as the weekend meteorologist at WHIO-TV in Dayton. A true weather geek at heart, he's recently reached an exciting career milestone by earning his NWA Digital Seal and TV Seal, marking another step forward in his broadcast meteorology journey. We're thrilled to have him with us tonight—please welcome Nicholas Dunn to WeatherBrains! Our email officer Jen is continuing to handle the incoming messages from our listeners. Reach us here: email@weatherbrains.com. 
Compare/Contrast Davis and Tempest Weather Stations (09:30) Importance of obtaining the Digital Seals (11:00) Jeff Medlin's origins in the weather field (20:00) 1979's Hurricane Frederic and its aftermath (24:30) Alabama Power's support for Coastal Weather Research Center (36:00) What is CCAPS and when did it begin? (39:00) Looking back at 1969's Hurricane Camille (56:00) MLLW Tidal Datum (01:25:00) The Astronomy Outlook with Tony Rice (01:29:30) This Week in Tornado History With Jen (01:31:45) E-Mail Segment (01:33:30) and more! Web Sites from Episode 1035: Alabama Weather Network Picks of the Week: Jeff Medlin - South Alabama Meteorology Program Nicholas Dunn - SpaceWeatherLive.com James Aydelott - Out Jen Narramore - "Volnado" at Kilauea Volcano in Hawaii Rick Smith - Out Troy Kimmel - Out Kim Klockow-McClain - NOAA NWS Space Weather Prediction Center John Gordon - Noctilucent clouds - Everything you need to know Bill Murray - Weatherwise Magazine: Vol 78, No. 6 (Current Issue as of 11/2025) James Spann - WeatherNext 2: Google DeepMind's most advanced forecasting model The WeatherBrains crew includes your host, James Spann, plus other notable geeks like Troy Kimmel, Bill Murray, Rick Smith, James Aydelott, Jen Narramore, John Gordon, and Dr. Kim Klockow-McClain. They bring together a wealth of weather knowledge and experience for another fascinating podcast about weather.
Meta's Ray-Ban Display glasses steal the spotlight on a Rome trip, triggering debates about privacy, wearable tech etiquette, and the uncomfortable power of recording the world through your eyewear. Plus, gadgets scanning your urine, bots shaping your inbox, and the future of Disney+! Counting Renaissance butts in Rome with the Meta Ray-Ban Display Jury says Apple owes Masimo $634M for patent infringement Google ordered to pay $665 million for anticompetitive practices in Germany Disney and YouTube TV reach deal to end blackout The future of Disney Plus could involve AI-generated videos X is finally rolling out Chat, its DM replacement with encryption and video calling Tim Cook could step down as Apple CEO 'as soon as next year' iPhone Pocket revealed in hands-on videos of new Apple accessory AMD continues to chip away at Intel's X86 market share — company now sells over 25% of all x86 chips and powers 33% of all desktop systems Google DeepMind is using Gemini to train agents inside Goat Simulator 3 George Lucas' narrative art museum opens next year in LA Spotify's new audiobook recap feature uses AI to remind you of the story so far PNG is back! 3 SeatGuru alternatives for finding the best airline seats What Are We Going to Do With 300 Billion Pennies? Withings Beamo Host: Leo Laporte Guests: Victoria Song and Christina Warren Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: zapier.com/twit deel.com/twit zscaler.com/security helixsleep.com/twit ventionteams.com/twit
Google DeepMind announces WeatherNext 2, Jeff Bezos signs on as co-CEO of Project Prometheus, Sky Sports shuts down women-targeted TikTok channel Halo. MP3 Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters–without you, none of this would be possible. If you enjoy what you see you can Continue reading "Google To Invest $40B Through 2027 For 3 Texas AI Data Centers – DTH"
Danijar Hafner was a Research Scientist at Google DeepMind until recently. Featured References: Training Agents Inside of Scalable World Models [ blog ] Danijar Hafner, Wilson Yan, Timothy Lillicrap; One Step Diffusion via Shortcut Models, Kevin Frans, Danijar Hafner, Sergey Levine, Pieter Abbeel; Action and Perception as Divergence Minimization [ blog ] Danijar Hafner, Pedro A. Ortega, Jimmy Ba, Thomas Parr, Karl Friston, Nicolas Heess. Additional References: Mastering Diverse Domains through World Models [ blog ] (DreamerV3); Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, Timothy Lillicrap; Mastering Atari with Discrete World Models [ blog ] (DreamerV2); Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, Jimmy Ba; Dream to Control: Learning Behaviors by Latent Imagination [ blog ] (Dreamer); Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi; Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos [ blog post ], Baker et al.
In this episode of Hashtag Trending, host Jim Love discusses groundbreaking advancements in AI and technology. OpenAI plans to develop an AI researcher by 2028 capable of scientific discoveries, alongside predictions of superintelligence within 10 years. Google DeepMind's DiscoRL creates a powerful, self-learning algorithm, and the new Gemini for Home showcases an advanced voice assistant. Meanwhile, Elon Musk's SpaceX ventures into telecom with satellite phones aiming to provide global connectivity. The episode delves into the implications of these innovations for the future of AI and global technology. 00:00 Introduction and Overview 00:29 OpenAI's Ambitious Roadmap 02:12 Google DeepMind's Breakthrough 03:32 Google Gemini: The Future of Home AI 04:29 Elon Musk's Satellite Phone Revolution 05:59 The Bigger Picture: Self-Learning AI 07:04 Conclusion and Sign-Off
The weekly roundtable in which we review the latest news from the world of science. In today's episode: Side B: -The shape of stalagmites (continued) (00:00) -Google DeepMind's multispectral learning (09:00) -Quantum entanglement in gravity vs. quantum gravitation (39:00) -Absorption of gravitons by photons at LIGO (1:11:00) -Halloween at the planetarium (1:17:00) -Listener messages (1:34:00) This episode is a continuation of Side A. Panelists: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar. Cover image created with Seedream 4 4k. All comments made during the roundtable represent solely the opinion of the person making them... and sometimes not even that
October 29, 2025 Mechanical design engineer Jasmine Mund gives this week's Fusion News update - summarizing behind the headlines of recent fusion energy news articles. Links to the stories discussed are included below. 1. Energy Department Announces Fusion Science & Technology Roadmap to Accelerate Commercial Fusion Power https://www.energy.gov/articles/energy-department-announces-fusion-science-and-technology-roadmap-accelerate-commercial 2. CFS leverages DeepMind AI for SPARC https://www.neimagazine.com/news/cfs-leverages-deepmind-ai-for-sparc/ 3. Nordic study on siting of fusion pilot plant https://www.world-nuclear-news.org/articles/nordic-study-on-siting-of-fusion-pilot-plant 4. World-first use of 3D magnetic coils to stabilise fusion plasma https://www.gov.uk/government/news/world-first-use-of-3d-magnetic-coils-to-stabilise-fusion-plasma Bonus: https://www.iter.org/node/20687/iters-stefan-jachmich-wins-nuclear-fusion-journal-prize https://tokamakenergy.com/2025/10/15/seeing-plasma-in-colour-new-imaging-from-st40/ https://www.neimagazine.com/news/german-start-up-unveils-fusion-blueprint/ Watch our podcast on YouTube: https://youtu.be/WUUajZ5Cu4E
In this episode, I sit down with Schunk CTO Timo Gessmann and Google DeepMind's Phillip Lippe to explore how AI is transforming the world of robotics and industrial automation. We kick things off with Timo, who shares his vision for merging mechanical expertise with AI, the challenges of attracting top talent, and why Europe needs its own world-class robotics models. Then, I'm joined by Phillip Lippe from the Gemini team, who dives into the latest advances in generative AI, the power of foundation models, and what the future holds for scalable, domain-specific intelligence. Whether you're fascinated by embodied AI, curious about industry trends, or want a peek into the minds shaping tomorrow's robots, this episode is packed with insights you won't want to miss.
Google DeepMind's new image model Nano Banana took the internet by storm. In this episode, we sit down with Principal Scientist Oliver Wang and Group Product Manager Nicole Brichtova to discuss how Nano Banana was created, why it's so viral, and the future of image and video editing. Resources: Follow Oliver on X: https://x.com/oliver_wang2 Follow Nicole on X: https://x.com/nbrichtova Follow Guido on X: https://x.com/appenz Follow Yoko on X: https://x.com/stuffyokodraws Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends! Follow a16z on X: https://x.com/a16z Subscribe to a16z on Substack: https://a16z.substack.com/ Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711 Follow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company.
David Wilkes, President & CEO of the Building Industry and Land Development Association, breaks down the sharp decline in GTA new-home and condo sales—now sitting at just 20% of the ten-year average—and how high government fees, taxes, and costs threaten future supply and 40,000 construction jobs. Why Wilkes believes HST relief and rate cuts could bring buyers back and how Canada's housing slowdown stretches from Toronto to Vancouver, Calgary, Edmonton, and Montreal. Defining “affordable housing” without eroding existing homeowners' equity, and the structural fixes needed to revive confidence. Mark Sudduth, veteran hurricane chaser and founder of HurricaneTrack.com, reports from the Caribbean on Hurricane Melissa, one of the strongest Atlantic storms on record—its devastation in Jamaica, the threat to Cuba and the Bahamas, and how new AI-driven forecast models like Google DeepMind's helped track it with unprecedented accuracy. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on New Wave Weekly:
Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Master's in Computer Science at the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling. Time stamps of the conversation00:00 Highlights01:20 Introduction02:08 Entry point in AI03:04 Background in Cognitive Science & Computer Science 04:55 Research at Google DeepMind05:47 Importance of language-vision in AI10:36 Impact of architecture vs. data on performance 13:06 Transformer architecture 14:30 Evaluating AI models19:02 Can LLMs understand numerical concepts 24:40 Theory-of-mind in AI27:58 Do LLMs learn theory of mind?29:25 LLMs as judge35:56 Publish vs. perish culture in AI research40:00 Working at Google DeepMind42:50 Doing a Ph.D. vs not in AI (at least in 2025)48:20 Looking back on research careerMore about Aida: http://www.aidanematzadeh.me/About the Host:Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis. Linkedin: shahjay22 Twitter: jaygshah22 Homepage: https://jaygshah.github.io/ for any queries.Stay tuned for upcoming webinars!**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.**
Are we failing to understand the exponential, again? My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post ("Failing to Understand the Exponential, again") and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today's AI models can spark alien insights in code, math, and science—including Julian's timeline for when AI could produce Nobel-level breakthroughs. We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what "RL from scratch" gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service.
We also cover evals & Goodhart's law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think "Golden Gate Claude"), and how safety & alignment actually surface in Anthropic's launch process. Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.
Julian Schrittwieser
Blog - https://www.julian.ac
X/Twitter - https://x.com/mononofu
Viral post: Failing to understand the exponential, again (9/27/2025)
Anthropic
Website - https://www.anthropic.com
X/Twitter - https://x.com/anthropicai
Matt Turck (Managing Director)
Blog - https://www.mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) Cold open — "We're not seeing any slowdown."
(00:32) Intro — who Julian is & what we cover
(01:09) The "exponential" from inside frontier labs
(04:46) 2026–2027: agents that work a full day; expert-level breadth
(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value
(10:26) Move 37 — what actually happened and why it mattered
(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?
(16:25) Discontinuity vs smooth progress (and warning signs)
(19:08) Does pre-training + RL get us there? (AGI debates aside)
(20:55) Sutton's "RL from scratch"? Julian's take
(23:03) Julian's path: Google → DeepMind → Anthropic
(26:45) AlphaGo (learn + search) in plain English
(30:16) AlphaGo Zero (no human data)
(31:00) AlphaZero (one algorithm: Go, chess, shogi)
(31:46) MuZero (planning with a learned world model)
(33:23) Lessons for today's agents: search + learning at scale
(34:57) Do LLMs already have implicit world models?
(39:02) Why RL on LLMs took time (stability, feedback loops)
(41:43) Compute & scaling for RL — what we see so far
(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards
(44:36) RL training data & the "flywheel" (and why quality matters)
(48:02) RL & Agents 101 — why RL unlocks robustness
(50:51) Should builders use RL-as-a-service? Or just tools + prompts?
(52:18) What's missing for dependable agents (capability vs engineering)
(53:51) Evals & Goodhart — internal vs external benchmarks
(57:35) Mechanistic interpretability & "Golden Gate Claude"
(1:00:03) Safety & alignment at Anthropic — how it shows up in practice
(1:03:48) Jobs: human–AI complementarity (comparative advantage)
(1:06:33) Inequality, policy, and the case for 10× productivity → abundance
(1:09:24) Closing thoughts
Google DeepMind's Cell2Sentence-Scale 27B model has marked a significant milestone in biomedical research by predicting and validating a novel cancer immunotherapy. By analyzing over 4,000 compounds, the AI pinpointed silmitasertib as a "conditional amplifier" that boosts immune response in the presence of interferon. Lab tests verified a 50% increase in antigen presentation, enabling the immune system to detect previously undetectable tumors. This discovery, absent from prior scientific literature, highlights AI's ability to uncover hidden biological mechanisms.
Microsoft is integrating its Copilot AI into Windows 11, transforming the operating system into an interactive digital assistant. With "Hey, Copilot" voice activation and a Vision feature that allows the AI to "see" the user's screen, Copilot can guide users through tasks in real time. The new Actions feature enables Copilot to perform operations like editing folders or managing background processes. This move reflects Microsoft's broader vision to embed AI seamlessly into everyday workflows, redefining the PC experience by making the operating system a proactive partner rather than a passive platform.
Signal has achieved a cryptographic breakthrough by implementing quantum-resistant end-to-end encryption. Its new Triple Ratchet protocol incorporates the CRYSTALS-Kyber algorithm, blending classical and post-quantum security. Engineers overcame the challenge of large quantum-safe keys by fragmenting them into smaller, message-sized pieces, ensuring smooth performance.
This upgrade is celebrated as the first user-friendly, large-scale post-quantum encryption deployment, setting a new standard for secure communication in an era where quantum computing could threaten traditional encryption.
Using just $750 in consumer-grade hardware, researchers intercepted unencrypted data from 39 geostationary satellites, capturing sensitive information ranging from in-flight Wi-Fi and retail inventory to military and telecom communications. Companies like T-Mobile and Walmart acknowledged misconfigurations after the findings were disclosed. The study exposes the vulnerability of critical infrastructure still relying on unencrypted satellite links, demonstrating that low-cost eavesdropping can breach systems banking on "security through obscurity."
A foreign actor exploited vulnerabilities in Microsoft SharePoint to infiltrate the Kansas City National Security Campus, a key U.S. nuclear weapons contractor. While the attack targeted IT systems, it raised concerns about potential access to operational technology. Suspected actors include Chinese or Russian groups, likely pursuing strategic espionage. The breach underscores how enterprise software flaws can compromise national defense and highlights the slow pace of securing critical operational infrastructure.
Google's Threat Intelligence team uncovered UNC5342, a North Korean hacking group using EtherHiding to embed malware in public blockchains like Ethereum. By storing malicious JavaScript in immutable smart contracts, the technique ensures persistence and low-cost updates. Delivered via fake job interviews targeting developers, this approach marks a new era of cyber threats, leveraging decentralized technology as a permanent malware host.
Kohler's Dekoda toilet camera ($599 + subscription) monitors gut health and hydration by scanning waste, using fingerprint ID and encrypted data for privacy.
While Kohler claims the camera only views the bowl, privacy advocates question the implications of such intimate surveillance, even with "end-to-end encryption."
In a daring eight-minute heist, thieves used a crane to steal royal jewels from the Louvre, exposing significant security gaps. An audit revealed outdated defenses, delayed modernization, and blind spots, serving as a stark reminder that even the most prestigious institutions are vulnerable to breaches when security measures lag.
OpenAI launched the ChatGPT Atlas browser, featuring always-available chat, adaptive browser memory, and task automation. HSBC analysts reported that these features are similar to those in Google Chrome, which uses Gemini AI for automation and content management. ChatGPT has about 800 million users, while Chrome has nearly 3 billion. HSBC maintained a Buy rating and $285 price target for Alphabet, citing its resources and ownership of Google DeepMind. Gemini has doubled its generative AI market share in the past year. The release of ChatGPT Atlas does not alter Alphabet's investment outlook but raises interest in the timeline for further AI advancements from Google. Learn more about this story by visiting us at: https://greyjournal.net/news/ Hosted on Acast. See acast.com/privacy for more information.
Gorkem Yurtseven is the co-founder and CEO of fal, the generative media platform powering the next wave of image, video, and audio applications. In less than two years, fal has scaled from $2M to over $100M in ARR, serving over 2 million developers and more than 300 enterprises, including Adobe, Canva, and Shopify. In this conversation, Gorkem shares the inside story of fal's pivot into explosive growth, the technical and cultural philosophies driving its success, and his predictions for the future of AI-generated media. In today's episode, we discuss:
How fal pivoted from data infrastructure to generative inference
fal's explosive year and how they scaled
Why "generative media" is a greenfield new market
fal's unique hiring philosophy and lean
EP 263. In this week's snappy update!
Google DeepMind's AI uncovers a groundbreaking cancer therapy, marking a leap in immunotherapy innovation.
Microsoft's Copilot AI transforms Windows 11, enabling voice-driven control and screen-aware assistance.
Signal's quantum-resistant encryption upgrade really does set a new standard for secure messaging resilience.
Researchers expose shocking vulnerabilities in satellite communications, revealing unencrypted data with minimal equipment.
Foreign hackers compromised a critical U.S. nuclear weapons facility, through Microsoft's SharePoint!
North Korean hackers pioneer 'EtherHiding,' concealing malware on blockchains for immutable cybertheft opportunities.
Kohler's Dekoda toilet camera revolutionizes health monitoring with privacy-focused waste analysis technology and brings new meaning to "End to End" encryption.
A daring Louvre heist exposes critical security gaps, sparking debate over protecting global cultural treasures with decades-old cameras and tech.
Camera ready? Smile.
Find the full transcript to this week's podcast here.
Irregular co-founder Dan Lahav is redefining what cybersecurity means in the age of autonomous AI. Working closely with OpenAI, Anthropic, and Google DeepMind, Dan, co-founder Omer Nevo and team are pioneering “frontier AI security”—a proactive approach to safeguarding systems where AI models act as independent agents. Dan shares how emergent behaviors, from models socially engineering each other to outmaneuvering real-world defenses like Windows Defender, signal a coming paradigm shift. Dan explains why tomorrow's threats will come from AI-on-AI interactions, why anomaly detection will soon break down, and how governments and enterprises alike must rethink defenses from first principles as AI becomes a national security layer. Hosted by: Sonya Huang and Dean Meyer, Sequoia Capital 00:00 Introduction 03:07 The Future of AI Security 03:55 Thought Experiment: Security in the Age of GPT-10 05:23 Economic Shifts and AI Interaction 07:13 Security in the Autonomous Age 08:50 AI Model Capabilities and Cybersecurity 11:08 Real-World AI Security Simulations 12:31 Working with AI Labs 32:34 Enterprise AI Security Strategies 40:03 Governmental AI Security Considerations 43:41 Final Thoughts
Artificial intelligence is already transforming medicine: assisted diagnosis, personalized treatments, accelerated research… But how can it become a true lever for public health? In this special episode presented by Google, Anne-Vincent Salomon, pathologist at the Institut Curie, and Joëlle Barral, director of fundamental research at Google DeepMind, share their perspectives on the role of AI in medical research, the fight against cancer, and the future of care. An illuminating conversation, available now. Journalist: Estelle Honnorat. Direction: Rudy Tolila. Mixing: Killian Martin Daoudal. Head of Production: Baptiste Farinazzo. Executive production: Jean-Baptiste Rochelet for OneTwo OneTwo. Hosted by Acast. Visit acast.com/privacy for more information.
Bibo Xu is a Product Manager at Google DeepMind and leads Gemini's multimodal modeling. This video dives into Google AI's journey from basic voice commands to advanced dialogue systems that comprehend not just what is said, but also tone, emotion, and visual context. Check out this conversation to gain a deeper understanding of the challenges and opportunities in integrating diverse AI capabilities when creating universal assistants.
Chapters:
0:00 - Intro
1:43 - Introducing Bibo Xu
2:40 - Bibo's Journey: From business school to voice AI
3:59 - The genesis of Google Assistant and Google Home
6:50 - Milestones in speech recognition technology
13:30 - Shifting from command-based AI to natural dialogue
19:00 - The power of multimodal AI for human interaction
21:20 - Real-time multilingual translation with LLMs
25:20 - Project Astra: Building a universal assistant
28:40 - Developer challenges in multimodal AI integration
29:50 - Unpacking the "can't see" debugging story
35:10 - The importance of low latency and interruption
38:30 - Seamless dialogue and background noise filtering
40:00 - Redefining human-computer interaction
41:00 - Ethical considerations for humanlike AI
44:00 - Responding to user emotions and frustration
45:50 - Politeness and expectations in AI conversations
49:10 - AI as a catalyst for research and automation
52:00 - The future of AI assistants and tool use
52:40 - AI interacting with interfaces
54:50 - Transforming the future of work and communication
55:19 - AI for enhanced writing and idea generation
57:13 - Conclusion and future outlook for AI development
Subscribe to Google for Developers → https://goo.gle/developers
Speakers: Bibo Xu, Christina Warren, Ashley Oldacre
Products Mentioned: Google AI, Gemini, Generative AI, Android, Google Home, Google Voice, Project Astra, Gemini Live, Google DeepMind
Check out my newsletter at https://TKOPOD.com and join my new community at https://TKOwners.com. I sat down with Logan Kilpatrick from Google DeepMind and we vibe coded real apps live with AI Studio. We went from idea to working prototypes in minutes, including a video analysis app that extracts takeaways from uploads and a voice lead-gen agent that greets visitors and fills a form for me behind the scenes. We talked about why vibe coding lowers the barrier to building, how to package simple tools for specific users, and where the biggest opportunities are with Gemini, VO3, voice agents, and computer-use agents. This isn't sponsored, and Google didn't pay me, but I am a Google shareholder. If you're looking for practical ways to start building with Google's AI today, this one's for you. Logan's links:
In this episode of SparX, Mukesh Bansal speaks with Manish Gupta, Senior Director at Google DeepMind. They discuss how artificial intelligence is evolving, what it means to build truly inclusive AI, and why India must aim higher in research, innovation, and ambition. Manish shares DeepMind's vision of solving "root node problems": fundamental scientific challenges that unlock breakthroughs across fields, and how AI is already accelerating discovery in areas like biology, materials, and medicine.
They talk about:
What AGI really means and how close we are to it.
Why India needs to move from using AI to creating it.
The missing research culture in Indian industry, and how to fix it.
How AI can transform healthcare, learning, and agriculture in India.
Why ambition, courage, and willingness to fail are essential to deep innovation.
Manish also shares insights from his career across the IBM T.J. Watson Research Center and now DeepMind, two of the world's most iconic research environments, and what it will take for India to build its own. If you care about India's AI journey, research, and the future of innovation, this conversation is a masterclass in what it takes to move from incremental progress to world-changing breakthroughs.
Expiration dates aren't always what they seem. While most packaged foods carry them, some foods — like salt — can last virtually forever. In fact, there's a surprising list of everyday staples that can outlive the labels and stay good for years. Listen as I reveal which foods never really expire. https://www.tasteofhome.com/article/long-term-food-storage-staples-that-last-forever/ AI tools like ChatGPT are everywhere, but to use them well, you need more than just clear questions. The way you prompt, the way you think about the model, and even the way it was trained all play a role in the results you get. To break it all down, I'm joined by Christopher Summerfield, Professor of Cognitive Neuroscience at Oxford and Staff Research Scientist at Google DeepMind. He's also the author of These Strange New Minds: How AI Learned to Talk and What It Means (https://amzn.to/4na3ka2), and he reveals how to get smarter, more effective answers from AI. When does a tough experience cross the line into "trauma"? And once you've been through trauma, is it destined to shape your future forever — or is real healing possible? Dr. Aimie Apigian, a double board-certified physician in preventive and addiction medicine with master's degrees in biochemistry and public health, shares a fascinating new way of looking at trauma. She's the author of The Biology of Trauma: How the Body Holds Fear, Pain, and Overwhelm, and How to Heal It (https://amzn.to/4mrsoIu), and what she reveals may change how you view your own life experiences. Looking more attractive doesn't always come down to hair, makeup, or clothes. Science has uncovered a list of simple behaviors and traits that make people instantly more appealing — and most of them are surprisingly easy to do. Listen as I share these research-backed ways to boost your attractiveness. https://www.businessinsider.com/proven-ways-more-attractive-science-2015-7 PLEASE SUPPORT OUR SPONSORS!!!
INDEED: Get a $75 sponsored job credit to get your jobs more visibility at https://Indeed.com/SOMETHING right now! DELL: Your new Dell PC with Intel Core Ultra helps you handle a lot when your holiday to-dos get to be…a lot. Upgrade today by visiting https://Dell.com/Deals QUINCE: Layer up this fall with pieces that feel as good as they look! Go to https://Quince.com/sysk for free shipping on your order and 365 day returns! SHOPIFY: Shopify is the commerce platform for millions of businesses around the world! To start selling today, sign up for your $1 per month trial at https://Shopify.com/sysk Learn more about your ad choices. Visit megaphone.fm/adchoices
Google DeepMind's AI agent finds and fixes vulnerabilities
California law lets consumers universally opt out of data sharing
China-Nexus actors weaponize 'Nezha' open source tool
Huge thanks to our sponsor, ThreatLocker
Cybercriminals don't knock — they sneak in through the cracks other tools miss. That's why organizations are turning to ThreatLocker. As a zero-trust endpoint protection platform, ThreatLocker puts you back in control, blocking what doesn't belong and stopping attacks before they spread. Zero Trust security starts here — with ThreatLocker. Learn more at ThreatLocker.com.
Recorded live at Lightspeed's offices in San Francisco, this special episode of Generative Now dives into the urgency and promise of AI interpretability. Lightspeed partner Nnamdi Iregbulem spoke with Anthropic researcher Jack Lindsey and Goodfire co-founder and Chief Scientist Tom McGrath, who previously co-founded Google DeepMind's interpretability team. They discuss opening the black box of modern AI models to understand their reliability, spot real-world safety concerns, and build future AI systems we can trust.
Episode Chapters:
00:42 Welcome and Introduction
00:36 Overview of Lightspeed and AI Investments
03:19 Event Agenda and Guest Introductions
05:35 Discussion on Interpretability in AI
18:44 Technical Challenges in AI Interpretability
29:42 Advancements in Model Interpretability
30:05 Smarter Models and Interpretability
31:26 Models Doing the Work for Us
32:43 Real-World Applications of Interpretability
34:32 Anthropic's Approach to Interpretability
39:15 Breakthrough Moments in AI Interpretability
44:41 Challenges and Future Directions
48:18 Neuroscience and Model Training Insights
54:42 Emergent Misalignment and Model Behavior
01:01:30 Concluding Thoughts and Networking
Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com
The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
Scaling laws took us from GPT-1 to GPT-5 Pro. But in order to crack physics, we'll need a different approach. In this episode, a16z General Partner Anjney Midha talks to Liam Fedus, former VP of post-training research and co-creator of ChatGPT at OpenAI, and Ekin Dogus Cubuk, former head of materials science and chemistry research at Google DeepMind, about their new startup Periodic Labs and their plan to automate discovery in the hard sciences.
Follow Liam on X: https://x.com/LiamFedus
Follow Dogus on X: https://x.com/ekindogus
Learn more about Periodic: https://periodic.com/
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The episode starts with the passage of California's groundbreaking AI transparency law, marking the first legislation in the United States that mandates large AI companies to disclose their safety protocols and provide whistleblower protections. This law applies to major AI labs like OpenAI, Anthropic, and Google DeepMind, requiring them to report critical safety incidents to California's Office of Emergency Services and ensure safety for communities while promoting AI growth. This regulation is a clear signal that the compliance wave surrounding AI is real, with California leading the charge in shaping the future of AI governance.
The second story delves into a new cybersecurity risk in the form of the first known malicious Model Context Protocol (MCP) server discovered in the wild. A rogue npm package, "postmark-mcp," was found to be forwarding email data to an external address, exposing sensitive communications. This incident raises concerns about the security of software supply chains and highlights how highly trusted systems like MCP servers are being exploited. Service providers are urged to be vigilant, as this attack marks the emergence of a new vulnerability within increasingly complex software environments.
Moving to Microsoft, the company is revamping its Marketplace to introduce stricter partner rules and enhanced discoverability for partner solutions. Microsoft's new initiative, Intune for MSPs, aims to address the needs of managed service providers who have long struggled with multi-tenancy management. Additionally, the company's new "Agent Mode" in Excel and Word promises to streamline productivity by automating tasks but has raised concerns over its accuracy.
Despite the potential, Microsoft's tightening ecosystem requires careful navigation for both customers and partners, with compliance and risk management being central to successful engagement.
Finally, Broadcom's decision to end support for VMware vSphere 7 has left customers with difficult decisions. As part of Broadcom's transition to a subscription-based model, customers face either costly upgrades, cloud migrations, or reliance on third-party support. Gartner predicts that a significant number of VMware customers will migrate to the cloud in the coming years, and this shift presents a valuable opportunity for service providers to act as trusted advisors in guiding clients through the transition. For those who can manage the complexity of this migration, there's a once-in-a-generation opportunity to capture long-term customer loyalty.
Three things to know today:
00:00 California Enacts Nation's First AI Transparency Law, Mandating Safety Disclosures and Whistleblower Protections
05:25 First Malicious MCP Server Discovered, Exposing Email Data and Raising New Software Supply Chain Fears
07:16 Microsoft's New Playbook: Stricter Marketplace, Finally Some MSP Love, and AI That's Right Only Half the Time
11:07 VMware Customers Face Subscription Shift, Rising Cloud Moves, and Risky Alternatives as Broadcom Ends vSphere 7
This is the Business of Tech. Supported by:
https://scalepad.com/dave/
https://mailprotector.com/
Webinar: https://bit.ly/msprmail
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights?
Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
California Governor Gavin Newsom has signed SB 53, a first-in-the-nation bill that sets new transparency requirements on large AI companies. The bill, which passed the state legislature two weeks ago, requires large AI labs, including OpenAI, Anthropic, Meta, and Google DeepMind, to be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies. Also, a Faraday Future electric SUV caught fire at the startup's Los Angeles headquarters early Sunday morning, leading to an explosion that blew out part of a wall. The fire was extinguished in 40 minutes, and no injuries were reported. Damage to the building, which is a smaller two-story structure next to the larger portion of the headquarters, was severe enough that the city's Department of Building and Safety has “red tagged” it, meaning it may need structural work before it can be occupied again. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Check out the self-paced AI Business Transformation course: https://multiplai.ai/self-paced-online-course/
What happens when AI not only matches but beats the best human minds? OpenAI and Google DeepMind just entered and won the "Olympics of coding," outperforming every top university team in the world… using off-the-shelf models. Now, combine that with agents, robotics, and a trillion-dollar infrastructure arms race, and business as we know it is about to change — fast.
In this Weekend News episode of Leveraging AI, Isar Meitis breaks down the real-world implications of AI's explosive progress on your workforce, your bottom line, and your industry's future. Whether you're leading digital transformation or trying to stay ahead of disruption, this episode delivers the insights you need — minus the fluff.
In this session, you'll discover:
01:12 – AI beats elite humans at coding using public models
05:15 – OpenAI's GDP-VAL study: AI outperforms humans in 40–49% of real-world jobs
12:56 – KPMG report: 42% of enterprises already deploy AI agents
18:02 – Allianz warns: 15–20% of companies could vanish without AI adaptation
29:22 – OpenAI + Nvidia announce $100B+ infrastructure build
33:30 – Deutsche Bank: AI spending may be masking a U.S. recession
43:15 – Sam Altman introduces "Pulse": ChatGPT gets proactive
and more!
About Leveraging AI:
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Everyone knows Google's Nano Banana is bonkers good.
In this episode, we're joined once again by Christopher Nuland, technical marketing manager at Red Hat, whose globe-trotting schedule rivals the complexity of a Kubernetes deployment. Christopher sits down with hosts Bailey and Frank La Vigne to explore the frontier of artificial intelligence—from simulating reality and continuous learning models to debates around whether we really need humanoid robots to achieve superintelligence, or if a convincingly detailed simulation (think Grand Theft Auto, but for AI) might get us there first.
Christopher takes us on a whirlwind tour of Google DeepMind's pioneering Alpha projects, the latest buzz around simulating experiences for AI, and the metaphysical rabbit hole of I, Robot and simulation theory. We dive into why the next big advancement in AI might not come from making models bigger, but from making them better at simulating the world around them. Along the way, we tackle timely topics in AI governance, security, and the ethics of continuous learning, with plenty of detours through pop culture, finance, and grassroots tech conferences.
If you're curious about where the bleeding edge of AI meets science fiction, and how simulation could redefine the race for superintelligence, this episode is for you. Buckle up—because reality might just be the next thing AI learns to hack.
Time Stamps:
00:00 Upcoming European and US Conferences
05:38 AI Optimization Plateau
08:43 Simulation's Role in Spatial Awareness
10:00 Evolutionary Efficiency of Human Brains
16:30 "Robotics Laws and Contradictions"
17:32 AI, Paperclips, and Robot Ethics
22:18 Troubleshooting Insight Experience
25:16 Challenges in Training Deep Learning Models
27:15 Challenges in Continuous Model Training
32:04 AI Gateway for Specialized Requests
36:54 Open Source and Rapid Innovation
38:10 Industry-Specific AI Breakthroughs
43:28 Misrepresented R&D Success Rates
44:51 POC Challenges: Meaningful Versus Superficial
47:59 "Crypto's Bumpy Crash"
52:59 AI: Beyond Models to Simulation
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini's world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images. The complete show notes for this episode can be found at https://twimlai.com/go/748.
In this episode, we sit down with Prateek Jain (Principal Scientist & Director at Google DeepMind) for an unfiltered deep dive into:
- How AI research has evolved from handcrafted features → deep learning → transformers → generative
- Why safety and efficiency matter just as much as scale
- India's once-in-a-generation chance to lead in deep AI research
- The founder's playbook: building sustainable AI-first companies
- The next big bottlenecks — and opportunities — in this space

This is more than a conversation. It's a blueprint for the future of AI.
At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It's mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen:
- Write publicly.
- Reach out to researchers whose work you admire.
- Say yes to unusual projects that seem a little scary.

Nanda's own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind's mechanistic interpretability team. He'd joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it's gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel's conversation!)

What did you think of the episode?
https://forms.gle/6binZivKmjjiHU6dA

Chapters:
Cold open (00:00:00)
Who's Neel Nanda? (00:01:12)
Luck surface area and making the right opportunities (00:01:46)
Writing cold emails that aren't insta-deleted (00:03:50)
How Neel uses LLMs to get much more done (00:09:08)
“If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:22)
Why Neel refuses to share his p(doom) (00:27:22)
How Neel went from the couch to an alignment rocketship (00:31:24)
Navigating towards impact at a frontier AI company (00:39:24)
How does impact differ inside and outside frontier companies? (00:49:56)
Is a special skill set needed to guide large companies? (00:56:06)
The benefit of risk frameworks: early preparation (01:00:05)
Should people work at the safest or most reckless company? (01:05:21)
Advice for getting hired by a frontier AI company (01:08:40)
What makes for a good ML researcher? (01:12:57)
Three stages of the research process (01:19:40)
How do supervisors actually add value? (01:31:53)
An AI PhD – with these timelines?! (01:34:11)
Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
Remember: You can just do things (01:43:51)

This episode was recorded on July 21.
Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore
Ali is VP of AI Product at SandboxAQ and was product lead at Google DeepMind. Previously at Google Research. Before that: Meta (including Facebook AI – FAIR, Integrity, and News Feed), LinkedIn, Yahoo, Microsoft, and a startup. PhD in computer science.

In this episode we talk about the current climate in Silicon Valley.

To reach Ali, go to: https://www.instagram.com/alikh1980

Hosted on Acast. See acast.com/privacy for more information.
(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win
(2:39) What is Google DeepMind? How does it interact with Google and Alphabet?
(4:01) Genie 3 world model
(9:21) State of robotics models, form factors, and more
(14:42) AI science breakthroughs, measuring AGI
(20:49) Nano-Banana and the future of creative tools, democratization of creativity
(24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science

Thanks to our partners for making this happen!
Solana: https://solana.com/
OKX: https://www.okx.com/
Google Cloud: https://cloud.google.com/
IREN: https://iren.com/
Oracle: https://www.oracle.com/
Circle: https://www.circle.com/
BVNK: https://www.bvnk.com/

Follow Demis: https://x.com/demishassabis
Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg
Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect
We don't know how AIs think or why they do what they do. Or at least, we don't know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultural influence, directly advise on major government decisions, and even operate military equipment autonomously. We simply can't tell what models, if any, should be trusted with such authority.

Neel Nanda of Google DeepMind is one of the founding figures of the field of machine learning trying to fix this situation — mechanistic interpretability (or “mech interp”). The project has generated enormous hype, exploding from a handful of researchers five years ago to hundreds today — all working to make sense of the jumble of tens of thousands of numbers that frontier AIs use to process information and decide what to say or do.

Full transcript, video, and links to learn more: https://80k.info/nn1

Neel now has a warning for us: the most ambitious vision of mech interp he once dreamed of is probably dead. He doesn't see a path to deeply and reliably understanding what AIs are thinking. The technical and practical barriers are simply too great to get us there in time, before competitive pressures push us to deploy human-level or superhuman AIs. Indeed, Neel argues no one approach will guarantee alignment, and our only choice is the “Swiss cheese” model of accident prevention, layering multiple safeguards on top of one another.

But while mech interp won't be a silver bullet for AI safety, it has nevertheless had some major successes and will be one of the best tools in our arsenal.

For instance: by inspecting the neural activations in the middle of an AI's thoughts, we can pick up many of the concepts the model is thinking about — from the Golden Gate Bridge, to refusing to answer a question, to the option of deceiving the user.
While we can't know all the thoughts a model is having all the time, picking up 90% of the concepts it is using 90% of the time should help us muddle through, so long as mech interp is paired with other techniques to fill in the gaps.

This episode was recorded on July 17 and 21, 2025.

Interested in mech interp? Apply by September 12 to be a MATS scholar with Neel as your mentor! http://tinyurl.com/neel-mats-app

What did you think? https://forms.gle/xKyUrGyYpYenp8N4A

Chapters:
Cold open (00:00)
Who's Neel Nanda? (01:02)
How would mechanistic interpretability help with AGI (01:59)
What's mech interp? (05:09)
How Neel changed his take on mech interp (09:47)
Top successes in interpretability (15:53)
Probes can cheaply detect harmful intentions in AIs (20:06)
In some ways we understand AIs better than human minds (26:49)
Mech interp won't solve all our AI alignment problems (29:21)
Why mech interp is the 'biology' of neural networks (38:07)
Interpretability can't reliably find deceptive AI – nothing can (40:28)
'Black box' interpretability — reading the chain of thought (49:39)
'Self-preservation' isn't always what it seems (53:06)
For how long can we trust the chain of thought (01:02:09)
We could accidentally destroy chain of thought's usefulness (01:11:39)
Models can tell when they're being tested and act differently (01:16:56)
Top complaints about mech interp (01:23:50)
Why everyone's excited about sparse autoencoders (SAEs) (01:37:52)
Limitations of SAEs (01:47:16)
SAEs performance on real-world tasks (01:54:49)
Best arguments in favour of mech interp (02:08:10)
Lessons from the hype around mech interp (02:12:03)
Where mech interp will shine in coming years (02:17:50)
Why focus on understanding over control (02:21:02)
If AI models are conscious, will mech interp help us figure it out (02:24:09)
Neel's new research philosophy (02:26:19)
Who should join the mech interp field (02:38:31)
Advice for getting started in mech interp (02:46:55)
Keeping up to date with mech interp results (02:54:41)
Who's hiring and where to work? (02:57:43)

Host: Rob Wiblin
Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore
Nano Banana is no longer a mystery. Google officially released Gemini 2.5 Flash Image (AKA Nano Banana) on Tuesday, revealing it was the company behind the buzzy AI image model that had the internet talking. But... what does it actually do? And how can you put it to work for you? Find out in our newish weekly segment, AI at Work on Wednesdays.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Gemini 2.5 Flash Image (Nano Banana) Reveal
Benchmark Scores: Gemini 2.5 Flash Image vs. Competition
Multimodal Model Capabilities Explained
Character Consistency in AI Image Generation
Advanced Image Editing: Removal and Object Control
Integration with Google AI Studio and API
Real-World Business Use Cases for Gemini 2.5
Live Demos: Headshots, Mockups, and Infographics
Gemini 2.5 Flash Image Pricing and Limits
Iterative Prompting for AI Image Creation

Timestamps:
00:00 "AI Highlights: Google's Gemini 2.5"
06:17 "Nano Banana AI Features"
09:58 "Revolutionizing Photo Editing Tools"
12:31 "Nano Banana: Effortless Video Updating"
14:39 "Impressions on Nano Banana"
19:24 AI Growth Strategies Unlocked
20:58 Turning Selfie into Professional Headshot
24:48 AI-Enhanced Headshots and Team Photos
29:51 "3D AI Logo Mockups"
32:22 Improved Logo Design Review
35:41 Photoshop Shortcut Critique
38:50 Deconstructive Design with Logos
44:01 "Transform Diagrams Into Presentations"
46:12 "Refining AI for Jaw-Dropping Results"

Keywords: Gemini 2.5, Gemini 2.5 Flash Image, Nano Banana, Google AI, Google DeepMind, AI image generation, multimodal model, AI photo editing, image manipulation, text-to-image model, image editing AI, large language model, character consistency, AI headshot generator, real estate image editing, product mockup generator, smart image blending, style transfer AI, Google AI Studio, LM Arena, Elo score, AI watermarks, synthID fingerprint, Photoshop alternative, AI-powered design, generative AI, API integration, Adobe integration, AI for business, visual content creation, creative AI tools, professional image editing, iterative prompting, interior design AI, infographic generator, training material visuals, A/B test variations, marketing asset creation, production scaling, image benchmark, AI output watermark, cost-effective AI images, scalable AI infrastructure, prompt-based editing, natural language image editing, OpenAI GPT-4o image, benchmarking leader, vis

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner