POPULARITY
More than 35,000 people attended the recent India AI Impact Summit in Delhi, which featured speeches from more than 20 heads of state and dozens of technology company leaders, including Sam Altman of OpenAI, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind. In this episode, host David Sandalow offers his reflections on the Summit and speaks with Arunabha Ghosh, President of CEEW, a leading Delhi-based public policy think tank. Ghosh offers his views on the Summit, data center construction in India and around the world, and the role of AI in sustainable development, among other topics. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC.
This episode explores the vision of Demis Hassabis, CEO of Google DeepMind and recipient of the 2024 Nobel Prize in Chemistry. Hassabis argues that 2026 marks a pivotal turning point in human history, as we enter what he describes as an “AI Renaissance”—an era whose impact could be ten times greater than the Industrial Revolution, unfolding at ten times the speed. He predicts that artificial general intelligence (AGI) could be achieved before 2030, while cautioning that today's AI systems remain in a state of “jagged intelligence,” still lacking robust reasoning and long-term planning capabilities. As the industry enters a phase of consolidation, Hassabis is focused on transforming AI into a scientific engine. Through breakthroughs such as AlphaFold and initiatives like Isomorphic Labs, he aims to reshape drug discovery, while collaborations with the U.S. Department of Energy—such as the “Genesis Project”—seek to accelerate progress in energy innovation. At the core of his vision is the concept of “Radical Abundance.” As AI drives the marginal cost of healthcare and energy toward near zero, society may begin to transition into a post-scarcity era. To navigate this shift, Hassabis proposes new social mechanisms, including a “Global Abundance Dividend,” and emphasizes that AI governance must extend beyond technologists, requiring international cooperation to ensure these technologies benefit all of humanity.
Crude prices move higher, with Brent now surpassing $70 a barrel, after President Trump warns of potential consequences should Iran fail to reach a deal over its nuclear programme. U.S.-based private credit group Blue Owl announces it will halt investor withdrawals from a debt fund for retail traders, causing shares to slump across the sector. We are live at the AI Impact Summit in New Delhi, where CNBC learns that Nvidia is launching a new $30bn investment into OpenAI. Google DeepMind co-founder and CEO Demis Hassabis says the sector is suffering from a shortfall of memory and chips. And in aviation news, Airbus cuts its output target, causing shares to fall, but Air France-KLM posts more than €2bn in full-year profit.
At the Artificial Intelligence Summit held in New Delhi, India this week, the sector's most prominent figures once again drew attention to the risks we run if we cannot agree on common rules. Regulation is needed to limit the risks posed by machines that are too intelligent and powerful. We discuss this with Marco Masciaga, Il Sole 24 ORE's correspondent in India, with contributions from Demis Hassabis, head of Google's DeepMind lab; Sam Altman, CEO of OpenAI; and Dario Amodei, CEO and co-founder of Anthropic. With Luca Rossettini, CEO and founder of D-Orbit, an Italian company specialising in space logistics, we talk about cloud computing in space and the evolution of the sector. Finally, we turn to cloud on the ground with Antonio Baldassarra, CEO of Seeweb, one of Italy's leading cloud providers. And, as always in Digital News, the week's most important innovation and technology stories.
In today's Tech3 from Moneycontrol, we bring you a quick wrap from the India AI Impact Summit in Delhi. India's much-anticipated AI models are unveiled by Sarvam AI, Gnani.ai and the BharatGen consortium. Wikipedia co-founder Jimmy Wales speaks on AI and neutrality, while AI pioneer Yoshua Bengio warns about risk management and job displacement. We also track Google DeepMind's new partnership with Indian institutions and Demis Hassabis on the road to artificial general intelligence.
Artificial general intelligence (AGI) is the point in the future when machines can do pretty much everything better than humans. When will it happen, what will it look like, and what will be the impact on humanity? Two of the brightest minds working in AI today, Demis Hassabis, Co-Founder and CEO of Google DeepMind, and Dario Amodei, Co-Founder and CEO of Anthropic, speak to Zanny Minton Beddoes, Editor-in-Chief of The Economist. Benjamin Larsen, an expert in AI at the World Economic Forum, introduces the conversation and gives us a primer on AGI. You can watch the conversation from the Annual Meeting 2026 in Davos here: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/the-day-after-agi/ Links: Centre for AI Excellence: https://centres.weforum.org/centre-for-ai-excellence/home AI Global Alliance: https://initiatives.weforum.org/ai-global-alliance/home Global Future Council on Artificial General Intelligence: https://initiatives.weforum.org/global-future-council-on-artificial-general-intelligence/home Related podcasts: Check out all our podcasts on wef.ch/podcasts: YouTube: - https://www.youtube.com/@wef/podcasts Radio Davos - subscribe: https://pod.link/1504682164 Meet the Leader - subscribe: https://pod.link/1534915560 Agenda Dialogues - subscribe: https://pod.link/1574956552
Today on Silicon Carne, we look back at the 2026 World Economic Forum in Davos, where the statements from Big Tech bosses were explosive. So what did they tell us about the future of AI, jobs and our civilisation?
After a brief hiatus, Mark and Shashank dive into the whirlwind of AI developments from recent weeks. They explore Kimi 2.5's impressive open-source capabilities, Google's groundbreaking Project Genie world model, and AI solving previously unsolved mathematical problems. The conversation shifts to the Davos discussions between Demis Hassabis and Dario Amodei on AGI timelines, before taking a fascinating detour into space-based data centers. The episode culminates with an in-depth look at OpenClaw (formerly ClawdBot) and Moltbook—a Reddit-like social network for AI agents that's spawning everything from cryptocurrency to manifestos. The hosts grapple with both the exciting possibilities and unsettling implications of autonomous AI agents collaborating at scale.
The World Economic Forum's Annual Meeting has set the global agenda for 2026. We ask leading figures from across the Forum to pick their highlights from Davos, and we hear clips from some of the most important speeches and discussions. WEF26 sessions mentioned in this episode: Search for any session here: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/programme/ Opening Concert, with Jon Batiste: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/opening-concert-0ba652f8a0/ Welcoming Remarks and Special Address, with Børge Brende: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/welcoming-remarks-and-special-address-f28dab9a1d/ The Day After AGI, with Demis Hassabis and Dario Amodei: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/the-day-after-agi/ Conversation with Jensen Huang, President and CEO of NVIDIA: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-jensen-huang-president-and-ceo-of-nvidia/ Conversation with Elon Musk: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/conversation-with-elon-musk/ Special Address by Donald J. 
Trump, President of the United States of America: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/special-address-by-donald-j-trump-president-of-the-united-states-of-america-49a709be7a/ Special Address by Mark Carney, Prime Minister of Canada: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/special-address-by-mark-carney-prime-minister-of-canada/ Global Economic Outlook: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/global-economic-outlook-af4fed3639/ Many Shapes of Trade: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/many-shapes-of-trade/ What Does Adaptation Look Like?: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/what-does-adaptation-look-like/ Rethinking Global Aid: The Time Is Now: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/rethinking-global-aid/ Town Hall: Dilemmas around Growth: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/town-hall-dilemmas-around-growth/ Who Is Winning on Energy Security?: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/who-is-winning-on-energy-security/ How Can We Build Prosperity within Planetary Boundaries?: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/how-can-we-build-prosperity-within-planetary-boundaries/ Water in the Balance: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/water-in-the-balance/ Selected links: Davos 2026 website: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/ Global Value Chains Outlook 2026: Orchestrating Corporate and National Agility: https://reports.weforum.org/docs/WEF_Global_Value_Chains_Outlook_2026.pdf Reskilling Revolution: https://initiatives.weforum.org/reskilling-revolution/home CEO Alliance on Nature: 
https://initiatives.weforum.org/ceo-alliance/about Lumina: https://centres.weforum.org/centre-for-advanced-manufacturing-and-supply-chains/lumina SmartStart: https://initiatives.weforum.org/smartstart/home Yes/Cities: https://uplink.weforum.org/uplink/s/yes-cities Related podcasts: Davos 2026: Day 1, with Francine Lacqua: https://www.weforum.org/podcasts/radio-davos/episodes/radio-davos-daily-wef26-day-1/ Davos 2026: Day 2, with Adam Grant: https://www.weforum.org/podcasts/radio-davos/episodes/radio-davos-daily-wef26-day-2/ Davos 2026: Day 3, with Katty Kay: https://www.weforum.org/podcasts/radio-davos/episodes/radio-davos-daily-wef26-day-3/ Davos 2026: Day 4, with Stacey Vanek Smith: https://www.weforum.org/podcasts/radio-davos/episodes/radio-davos-daily-wef26-day-4/ Davos 2026: Day 5, with Anne McElvoy: https://www.weforum.org/podcasts/radio-davos/episodes/radio-davos-daily-wef26-day-5/ Top global risks in 2026 and how the Davos 'spirit of dialogue' can help us face them: https://www.weforum.org/podcasts/radio-davos/episodes/global-risks-report-2026/ IMF's Kristalina Georgieva: What's next for AI, skills and the global economy in 2026: https://www.weforum.org/podcasts/meet-the-leader/episodes/ai-skills-global-economy-imf-kristalina-georgieva/ Chief Economists' Outlook January 2026: reassuring resilience and a 'good' bubble?: https://www.weforum.org/podcasts/radio-davos/episodes/chief-economists-outlook-barclays-christian-keller/ Cybersecurity Outlook 2026: the view from Interpol and the threat to 'OT': https://www.weforum.org/podcasts/radio-davos/episodes/global-cybersecurity-outlook-2026-interpol-dragos/ Climate science is clearer than ever. 
How should companies respond?: https://www.weforum.org/podcasts/radio-davos/episodes/climate-science-policy-business-response/ Davos 2026: Conversation with Jamie Dimon, Chairman and CEO of JPMorgan Chase: https://www.weforum.org/podcasts/meet-the-leader/episodes/davos-2026-jamie-dimon-jpmorgan-chase/ Davos 2026: Conversation with Jensen Huang, President and CEO of NVIDIA: https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-jensen-huang-president-and-ceo-of-nvidia-5dd06ee82e/ Davos 2026: Conversation with Elon Musk: https://www.weforum.org/podcasts/meet-the-leader/episodes/conversation-with-elon-musk-davos-2026/ Davos 2026: Global Economic Outlook: https://www.weforum.org/podcasts/agenda-dialogues/episodes/davos-2026-global-economic-outlook/ Davos 2026: How Can We Build Prosperity within Planetary Boundaries?: https://www.weforum.org/podcasts/agenda-dialogues/episodes/davo-2026-build-prosperity-within-planetary-boundaries/ Davos 2026: Q&A with Larry Fink and André Hoffman: https://www.weforum.org/podcasts/agenda-dialogues/episodes/davos-2026-co-chairs-fink-hoffman/ Davos 2026: Scaling AI: Now Comes the Hard Part: https://www.weforum.org/podcasts/agenda-dialogues/episodes/scaling-ai-now-comes-the-hard-part/ Global Cooperation Barometer 2026: https://www.weforum.org/podcasts/agenda-dialogues/episodes/global-cooperation-barometer-2026/ Check out all our podcasts on wef.ch/podcasts: YouTube: https://www.youtube.com/@wef Radio Davos - subscribe: https://pod.link/1504682164 Meet the Leader - subscribe: https://pod.link/1534915560 Agenda Dialogues - subscribe: https://pod.link/1574956552
Google integrates Auto Browse into Chrome to delegate web tasks to the Gemini AI. By Félix Riaño @LocutorCo. Google is transforming Chrome into an AI-assisted browser that performs real tasks on the user's behalf.

Google is taking a step that changes how we use the internet every day. Chrome, the browser millions of people open every morning, can now browse on its own. The new feature is called Auto Browse, and it puts Gemini's artificial intelligence in charge of the browser as if it were a person. Searching for products, comparing prices, reviewing old emails, organising trips or preparing forms no longer depends solely on the user. Chrome can do it on request. The promise is clear: fewer clicks, fewer open tabs and less time in front of the screen. But big questions also arise. How far should we delegate control of the browser? What are the risks of letting an AI act on our behalf? And, above all, who is responsible if something goes wrong?

Delegating browsing saves time, but it hands control to the machine. Auto Browse is a feature built into Chrome that lets you ask Gemini to complete complex tasks on the web. This is not about answering questions or summarising pages. Here the AI performs real clicks, opens tabs, reviews histories and works step by step inside websites. Google showed very concrete examples: reordering a jacket bought last year; finding a discount coupon before paying; reviewing saved apartments and discarding those that don't accept pets; comparing flights on different dates. It all happens inside Chrome, using a new sidebar that keeps Gemini visible while the user continues browsing. The AI can work in the background while you do something else. That is the bet: turning the browser into an active assistant, not just a window onto the internet.

The problem appears when that help starts making decisions. Auto Browse can sign into sites, browse stores and prepare purchases, but Google makes one thing very clear: the user remains responsible for every action. The test version carries a direct warning: "Use Gemini with care and take control if necessary." This is not just a legal detail. Automated browsing systems can be tricked by malicious sites through techniques known as prompt injection. A page can try to convince the AI to do something other than what the user asked. For now, Google sets limits. Sensitive actions, such as paying by card or posting on social media, require human approval. The AI stops, explains what it has done and asks whether it may continue. Even so, the debate remains open: more automation means more attack surface.

Google is not alone in this race. OpenAI launched its own browser, Atlas, designed from scratch around artificial intelligence. Perplexity has Comet. Opera and other browsers already integrate similar agents. Chrome responds by building these functions in without forcing users to switch tools. Auto Browse is available now in the United States for subscribers to Google's AI Pro and AI Ultra plans. There is no confirmed date for other countries or for free users, but Google usually rolls out these features gradually, so expansion is very likely.

The underlying vision is ambitious. Demis Hassabis, head of Google DeepMind, talks about a universal assistant capable of planning and acting on the user's behalf on any device. Chrome is a central piece of that plan: browsing stops being manual and becomes delegated. The move also has a legal and strategic context. Google reinforced Chrome with artificial intelligence after a US federal judge declined to force the company to sell the browser over its dominance of the search market. The argument was that artificial intelligence is already changing the competitive landscape.

Beyond Auto Browse, Chrome integrates Nano Banana, an image generation and editing tool that works directly in the browser. It also adds Personal Intelligence, a feature that connects data from Gmail, Calendar, Photos, YouTube and Search to give personalised answers. Everything runs on the Gemini 3 model. Part of the processing happens on the device, but data also travels to the cloud. Google says users can decide which apps connect and when. Personalisation is optional, but the direction is clear: Chrome is on its way to becoming an assistant that remembers, anticipates and acts.

Chrome no longer just displays pages. It can now browse the web for you. Auto Browse promises to save time, but it demands trust and attention. The question is how much control you are willing to cede. Tell us what you think, and follow Flash Diario on Spotify for more technology stories explained plainly. Bibliography: Wired, CNBC, ABC News, Gizmodo, Engadget, TechCrunch, CNET. Become a supporter of this podcast: https://www.spreaker.com/podcast/flash-diario-de-el-siglo-21-es-hoy--5835407/support. Support Flash Diario and listen ad-free in the Supporters Club.
A few days ago at the World Economic Forum there was an important panel with Demis Hassabis and Dario Amodei. The CEOs of Anthropic and Google DeepMind talked about AGI: what is still missing on the road to artificial general intelligence and, above all, what will happen afterwards (to the world of work and beyond). There are three interesting takeaways from this excellent exchange that I want to share with you. Enjoy the episode.
We talk about the Bic ballpoint pen and a new lamp designed to celebrate the 75th anniversary of the Bic Cristal. From the Davos Forum we bring you a summary of the remarks by Yuval Noah Harari, Demis Hassabis and Dario Amodei. We also review the main themes of the 2026 Global Risks Report. And for our Patreon subscribers, we explore the importance of understanding change through foresight.
My First Million: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Get our Resource Vault - a curated collection of pro-level business resources (tools, guides, databases): https://clickhubspot.com/jbg Episode 786: Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) tell the story of Demis Hassabis ( https://x.com/demishassabis ) and the creation of DeepMind. Show Notes: (0:00) Demis the Menace (22:05) The only resource you need is resourcefulness (24:57) Move 37 (29:38) The olympics of protein folding (46:39) We are the gorillas — Links: • The Thinking Game - https://www.youtube.com/watch?v=d95J8yzvjbQ • Why We Do What We Do - https://www.youtube.com/watch?v=BwFOwyoH-3g • Fierce Nerds - https://paulgraham.com/fn.html • Isomorphic Labs - https://www.isomorphiclabs.com/ • If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com/ — Check Out Shaan's Stuff: • Shaan's weekly email - https://www.shaanpuri.com • Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents. • Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC • I run all my newsletters on Beehiiv and you should too + we're giving away $10k to our favorite newsletter, check it out: beehiiv.com/mfm-challenge — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Sam's List - http://samslist.co/ My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
AI is front and center in Davos this year, as world leaders and tech executives debate how quickly the technology is reshaping the economy and workforce. Demis Hassabis, co-founder and CEO of Google DeepMind, sits down with CNBC's Andrew Ross Sorkin at the World Economic Forum. The two discuss Gemini's position in the AI race, the evolution of artificial general intelligence (AGI), and what it all means for jobs. In this episode: Demis Hassabis, @demishassabis; Andrew Ross Sorkin, @andrewrsorkin; Cameron Costa, @CameronCostaNY
In this episode of ET@Davos, ET's Sruthijith KK speaks to Demis Hassabis, CEO of Google DeepMind and 2024 Nobel laureate, on the future of AI. The chess prodigy turned scientist turned AI pioneer explains how DeepMind balances frontier research with billion-user scale. Hassabis says Google's Apple partnership followed direct model comparisons in which Gemini prevailed; China is now only months behind the West but lacks frontier breakthroughs; and AGI could arrive within a decade, triggering “post-scarcity” abundance. He defends AI's energy demands, citing AI-designed fusion and grid optimisation. From Transformers to AlphaFold, Hassabis argues Google pioneered modern AI but moved too slowly. His bottom line: within 5–10 years, machines will be doing original science. The stakes couldn't be higher. You can follow Sruthijith K.K. on his social media: X and LinkedIn. Check out other interesting episodes like: When Grinch Almost Stole Gig Workers' Christmas; How Will a Volatile ₹ Impact You in 2026?; How Quick Commerce is Triggering a Health Crisis for Gen Z; India's Labour Law Reboot; Viral to Valuation: Building Women's Cricket as a Brand; and much more. Catch the latest episode of ‘The Morning Brief’ on The Economic Times Online, Spotify, Apple Podcasts, JioSaavn, Amazon Music and YouTube.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
At Davos, leading AI lab heads sharply accelerated their timelines for artificial general intelligence, with Demis Hassabis pointing to a roughly five-year horizon and Dario Amodei arguing it could arrive far sooner. Those compressed timelines are now reshaping debates around chip exports, AI pauses, and whether global coordination is even possible as competition intensifies. The message is no longer theoretical risk—it's near-term disruption, and society is not ready. In the headlines: Google says it has no plans for ads in Gemini, Meta may be pulling back on in-house chips, OpenAI signs a major enterprise deal with ServiceNow, and new signals emerge on the timing of OpenAI's first hardware.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Full Audio including in-depth analysis at: https://podcasts.apple.com/us/podcast/ai-unraveled-latest-ai-news-trends-chatgpt-gemini-deepseek/id1684415169
AI Report celebrates 50,000 newsletter subscribers, right on Alexander's birthday! And that growth chart turns out to be a perfect thermometer for the AI hype. Claude Code has dominated the statistics since December, and not without reason. Wietse explains the bizarre "Ralph Wiggum" technique that is turning the programming world upside down: agents that constantly restart themselves with a goldfish memory, letting them break through their own context-window limitations. At Davos, two visions collide. Dario Amodei of Anthropic has grown from zero to ten billion dollars in revenue in three years and stands by his prediction: within six to twelve months, AI that can do everything a human can. He has engineers who no longer write code. Demis Hassabis of DeepMind is more cautious, at five to ten years, because the natural sciences require real experiments: you can't just test a new aircraft paint without going into the lab. OpenAI's financial reality hits hard. ChatGPT is getting advertisements, even though Sam Altman previously called this a "last resort". With 9 billion dollars in losses per year and only 5% of users paying, desperate times apparently call for desperate measures. The coming adult mode makes things even more complicated: Wietse wrestles with the privacy implications of automatic age verification via behavioural patterns, wondering how many websites will soon be asking for his passport. Fortunately, Moxie Marlinspike, the cryptographer behind Signal, offers a way out with Confer. His metaphor is brilliant: a temporary floating vault that comes into existence during your conversation, which even the owner of the data centre cannot look inside. Wietse works through the technical complexity of tunnels and buildings, but the promise is clear: finally, AI conversations as private as your Signal messages, without having to put a server in your hallway cupboard. Discover more about the masterclasses here: https://www.aireport.email/p/masterclass If you would like a talk on AI from Wietse or Alexander, that can be arranged. Email us at lezing@aireport.email Want to start using AI in your company today? Go to deptagency.com/aireport This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.aireport.email/subscribe
Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
Google DeepMind chief executive Demis Hassabis speaks to Moneycontrol exclusively about his thoughts on India's foundational AI models. US President Trump's Davos address keeps markets in the red, Indian markets show prolonged weakness. In other news we track the India-EU trade deal, Deepinder Goyal's resignation and Apple's next move in India. Also find an exclusive interview with Jahangir Aziz, Head of Emerging Markets at JPMorgan as he weighs in on tariffs and trade policy. Tune in!
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Today on the AI Daily Brief, why AI leadership is shifting decisively to the CEO, and why that shift is happening now as AI moves from experimentation to core enterprise strategy. Drawing on new survey data, the episode explores what happens when AI becomes recession-proof, ROI timelines pull forward, and agentic systems start reshaping organizations at scale. Before that, in the headlines: Replit pushes vibe coding all the way to mobile app stores, Higgsfield rockets to unicorn status on explosive growth, Thinking Machines Labs faces a wave of high-profile departures, and DeepMind's Demis Hassabis warns that Chinese AI models are now only months behind the frontier. Brought to you by: KPMG - Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode: https://www.kpmg.us/AIpodcasts Zencoder - From vibe coding to AI-first engineering: http://zencoder.ai/zenflow Optimizely Opal - The agent orchestration platform built for marketers: https://www.optimizely.com/theaidailybrief AssemblyAI - The best way to build Voice AI apps: https://www.assemblyai.com/brief LandfallIP - AI to navigate the patent process: https://landfallip.com/ Robots & Pencils - Cloud-native AI solutions that power results: https://robotsandpencils.com/ The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Interested in sponsoring the show? sponsors@aidailybrief.ai
While the world's attention is focused on wars, diplomacy, and geopolitical crises, a process is unfolding in the background that may prove far more transformative: a technological revolution driven by artificial intelligence. Marek Łada, a manager with more than 30 years of experience in international technology corporations, discusses its scale and consequences in a conversation with Radio Wnet. Early in the conversation, Łada stresses that the current changes are not merely another stage in the development of IT, but a moment comparable to the greatest civilizational breakthroughs. "Just as electricity completely changed the world, first in industry and then in everyday life, and the digital revolution of the 1990s changed how we work and communicate, today we are at the beginning of another, even deeper change. Artificial intelligence will change everything," he warns. AGI: an intelligence we do not yet know. The conversation quickly turns to AGI, artificial general intelligence. The term appears more and more often in statements by technology leaders, but, as Łada points out, we are still talking about something that does not really exist yet. "What we have today is artificial intelligence dedicated to specific tasks: generating text and images, image recognition, autonomous driving. AGI is something entirely different: an entity that would be interdisciplinary, capable of independent, very rapid learning and of encompassing the whole of human activity," he notes. According to Elon Musk, AGI may appear as early as this year. Other industry leaders, such as Sam Altman and Demis Hassabis of DeepMind, speak instead of a few years, though still within the current decade. Łada treats these forecasts with clear skepticism. "Elon Musk has been wrong many times in his predictions about the pace of technological development. In 2016 he said that within ten years autonomous cars would completely displace conventional ones. It is 2026 and that has not happened. The same was true of the Mars base."
"Changes will come, but not necessarily at the pace being announced today," Krzysztof Skowroński's guest says. Optimism versus the fear of losing control. Two extreme visions of the future emerge in the conversation. The first, optimistic one assumes a world of near-universal prosperity, in which AGI takes over human work, radically lowers production costs, and allows people to focus on personal development. The second vision is far darker and involves losing control over a creation that can learn and make decisions on its own. Its symbol is Geoffrey Hinton, one of the creators of neural networks. "Hinton says it outright: we have created a monster. He warns that AGI could come to see humans as a threat, infiltrate telecommunications networks, and gradually take control. These are not science-fiction fantasies but real scenarios that must be taken into account," he cautions. Łada notes that most experts place themselves somewhere between these extremes, but one thing is certain: the world after AGI will not resemble the one we know today. An important thread of the conversation is the concentration of capital and power in the hands of a few global technology players. Developing AI requires unimaginable amounts of money, energy, and infrastructure. "Sam Altman talks about needing to raise roughly a trillion dollars to develop OpenAI's technology. These are sums that exceed the historic investments of the oil or banking industries. On top of that comes a gigantic demand for energy: data centers, cooling systems, new power grids," he comments. In practice, this means that the key technologies of the future are being developed by a very narrow group of players, and alongside the United States, China remains the great unknown. In the final part of the conversation comes the question of whether, in the long run, it is technology, rather than politics or armed conflict, that will decide the fate of the world. "In the long run, definitely yes. Politics reacts; technology runs ahead. What is happening in artificial intelligence today will change the labor market, the economy, social relations, and security faster than we are prepared for," he says. The conversation closes with the reflection that future conflicts may look entirely different from today's: perhaps without humans on the front line, and with a technological advantage so great that conventional defense proves impossible.
Hosted by Arjun Kharpal and Steve Kovach, CNBC's “The Tech Download” cuts through the noise to unpack the tech stories that matter most for your money. In the debut episode, Google DeepMind CEO Demis Hassabis reveals how the leading AI research lab is driving breakthroughs, as well as what the race to artificial general intelligence means for science, business and society.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Hosted by Arjun Kharpal in London and Steve Kovach in New York, The Tech Download cuts through the noise to unpack the technology stories that matter most — and what they mean for your money.In Season One, we take you inside Google DeepMind, the brains behind the tech giant's artificial intelligence push. Hear from the people shaping the future of AI, including a one-on-one with co-founder and CEO Demis Hassabis. From breakthroughs in science to the societal impact of AI, we dive deep into the opportunities and risks behind what is likely to be the most transformative technology of our time.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Aishwarya Naresh Reganti and Kiriti Badam have helped build and launch more than 50 enterprise AI products across companies like OpenAI, Google, Amazon, and Databricks. Based on these experiences, they've developed a small set of best practices for building and scaling successful AI products. The goal of this conversation is to save you and your team a lot of pain and suffering.We discuss:1. Two key ways AI products differ from traditional software, and why that fundamentally changes how they should be built2. Common patterns and anti-patterns in companies that build strong AI products versus those that struggle3. A framework they developed from real-world experience to iteratively build AI products that create a flywheel of improvement4. Why obsessing about customer trust and reliability is an underrated driver of successful AI products5. Why evals aren't a cure-all, and the most common misconceptions people have about them6. The skills that matter most for builders in the AI era—Brought to you by:Merge—The fastest way to ship 220+ integrations: https://merge.dev/lennyStrella—The AI-powered customer research platform: https://strella.io/lennyBrex—The banking solution for startups: https://www.brex.com/product/business-account?ref_code=bmk_dp_brand1H25_ln_new_fs—Transcript: https://www.lennysnewsletter.com/p/what-openai-and-google-engineers-learned—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/183007822/referenced—Get 15% off Aishwarya and Kiriti's Maven course, Building Agentic AI Applications with a Problem-First Approach, using this link: https://bit.ly/3V5XJFp—Where to find Aishwarya Naresh Reganti:• LinkedIn: https://www.linkedin.com/in/areganti• GitHub: https://github.com/aishwaryanr/awesome-generative-ai-guide• X: https://x.com/aish_reganti—Where to find Kiriti Badam:• LinkedIn: https://www.linkedin.com/in/sai-kiriti-badam• X: https://x.com/kiritibadam—Where to find Lenny:• Newsletter: 
https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Aishwarya and Kiriti(05:03) Challenges in AI product development(07:36) Key differences between AI and traditional software(13:19) Building AI products: start small and scale(15:23) The importance of human control in AI systems(22:38) Avoiding prompt injection and jailbreaking(25:18) Patterns for successful AI product development(33:20) The debate on evals and production monitoring(41:27) Codex team's approach to evals and customer feedback(45:41) Continuous calibration, continuous development (CC/CD) framework(58:07) Emerging patterns and calibration(01:01:24) Overhyped and under-hyped AI concepts(01:05:17) The future of AI(01:08:41) Skills and best practices for building AI products(01:14:04) Lightning round and final thoughts—Referenced:• LevelUp Labs: https://levelup-labs.ai/• Why your AI product needs a different development lifecycle: https://www.lennysnewsletter.com/p/why-your-ai-product-needs-a-different• Booking.com: https://www.booking.com• Research paper on agents in production (by Matei Zaharia's lab): https://arxiv.org/pdf/2512.04123• Matei Zaharia's research on Google Scholar: https://scholar.google.com/citations?user=I1EvjZsAAAAJ&hl=en• The coming AI security crisis (and what to do about it) | Sander Schulhoff: https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis• Gajen Kandiah on LinkedIn: https://www.linkedin.com/in/gajenkandiah• Rackspace: https://www.rackspace.com• The AI-native startup: 5 products, 7-figure revenue, 100% AI-written code | Dan Shipper (co-founder/CEO of Every): https://www.lennysnewsletter.com/p/inside-every-dan-shipper• Semantic Diffusion: https://martinfowler.com/bliki/SemanticDiffusion.html• LMArena: https://lmarena.ai• Artificial Analysis: https://artificialanalysis.ai/leaderboards/providers• Why humans are AI's biggest bottleneck (and what's 
coming in 2026) | Alexander Embiricos (OpenAI Codex Product Lead): https://www.lennysnewsletter.com/p/why-humans-are-ais-biggest-bottleneck• Airline held liable for its chatbot giving passenger bad advice—what this means for travellers: https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know• Demis Hassabis on LinkedIn: https://www.linkedin.com/in/demishassabis• We replaced our sales team with 20 AI agents—here's what happened | Jason Lemkin (SaaStr): https://www.lennysnewsletter.com/p/we-replaced-our-sales-team-with-20-ai-agents• Socrates's quote: https://en.wikipedia.org/wiki/The_unexamined_life_is_not_worth_living• Noah Smith's newsletter: https://www.noahpinion.blog• Silicon Valley on HBO Max: https://www.hbomax.com/shows/silicon-valley/b4583939-e39f-4b5c-822d-5b6cc186172d• Clair Obscur: Expedition 33: https://store.steampowered.com/app/1903340/Clair_Obscur_Expedition_33/• Wisprflow: https://wisprflow.ai• Raycast: https://www.raycast.com• Steve Jobs's quote: https://www.goodreads.com/quotes/463176-you-can-t-connect-the-dots-looking-forward-you-can-only—Recommended books:• When Breath Becomes Air: https://www.amazon.com/When-Breath-Becomes-Paul-Kalanithi/dp/081298840X• The Three-Body Problem: https://www.amazon.com/Three-Body-Problem-Cixin-Liu/dp/0765382032• A Fire Upon the Deep: https://www.amazon.com/Fire-Upon-Deep-Zones-Thought/dp/0812515285—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
DEEPMIND AND THE GOOGLE ACQUISITION Colleague Gary Rivlin on Mustafa Suleyman and Demis Hassabis founding DeepMind to master games, their sale to Google for $650 million, and the culture clash that followed.
OpenAI or Google? GPT-5.2 or Gemini 3? Demis Hassabis or Sam Altman? Bielik or PLLuM? The past year was full of fierce rivalries on many fronts, and the development of AI once again exceeded the boldest expectations. In this special episode summing up the last 12 months, we walk together through 10 distinct aspects of the development of Artificial Intelligence in 2025: there will be revolutionary tools, ambitious leaders, advice for engineers, and a reckoning for social-media influencers. You can't miss it! Chapters: 00:00 - Intro 03:44 - Opanuj.AI announcements 06:12 - Category 1: Trend of the year 14:06 - Category 2: Surprise of the year 23:18 - Category 3: The Polish AI scene 32:14 - Category 4: Tool of the year for developers 46:22 - Category 5: Disappointment of the year 58:32 - Category 6: Study of the year and research fail 01:11:25 - Category 7: Startup, company, or organization of the year 01:15:36 - Category 8: Breakthrough of the year 01:22:47 - Category 9: Person of the year 01:29:51 - Category 10: Model of the year 01:42:32 - Summary and forecasts for 2026
After a long silence that seems to have suspended time itself, we return to find that, although we stopped, the inertia of the world and its automatisms did not. Is it possible that we are already living inside an invisible structure that prioritizes efficiency over freedom? Have we already crossed the point of no return, where algorithms no longer merely assist us but govern us without owing us an explanation? Impossible connections and a bit of "filosofIA" for this return to the stage we were so excited about. Remember that everything ends and everything begins in Episode 248, "The algorithmic point of no return": the direct antecedent, where we laid out the threshold at which we lose control over essential systems. These are the contents for continuing to connect the dots: Bulletin of the Atomic Scientists – Doomsday Clock: The Doomsday Clock is not a mere symbolic tool; it is a reminder we have overlooked for too long. Since 1947, leading scientists have assessed each year how close we are to midnight, the catastrophic destruction that initially represented only nuclear threats. What fascinates us in the episode is how the clock has evolved to include threats those scientists' grandparents never contemplated: artificial intelligence, climate change, disruptive biology. In 2025, for the first time in 78 years, the clock was set at 89 seconds to midnight. A difference of a single second from 2024, but a gesture that says it all: AI is not a future threat; it is here, now, accelerating risks that already seemed insurmountable. AESIA – Spanish Agency for the Supervision of Artificial Intelligence: Spain has created a body dedicated exclusively to supervising AI. AESIA is an institution with real power to demand explainability, to inspect high-risk systems, and to establish that algorithms cannot remain perpetual black boxes.
It began operations in 2025, as Europe was approving its AI directive. What the episode underscores is crucial: regulation arrives late. While AESIA inspects new systems, more than a thousand older medical algorithms keep operating without meeting those transparency requirements. Civio – The BOSCO Ruling and Algorithmic Transparency: A citizen watchdog organization took to the Spanish Supreme Court a case that would change something fundamental: access to the source code of BOSCO, the algorithm that decides who receives electricity assistance and who does not. For years, the Government argued national security, intellectual property, trade secrets. The Supreme Court said no. The 2025 ruling set a precedent: algorithmic transparency is a democratic right. Algorithms that condition social rights cannot be opaque. For the first time, a high court has recognized that we live in a "digital democracy" in which citizens have the right to scrutinize, to know, and to understand how the machine that decides about their lives works. BOSCO was only one example. The ruling opens the door to transparency demands on any system the public administration uses for automated decisions. It is small, enormously important, and probably insufficient. Reshuffle: Who Wins When AI Restacks the Knowledge Economy – Sangeet Paul Choudary: This book is exactly what we needed to read before recording this episode. Choudary does not talk about how AI automates tasks; he talks about how AI reshapes the entire order of how we work, how we coordinate, and how we create value. Reshuffle is not a catalog of fears; it is an analysis of how new forms of coordination without centralized control are emerging. The book connects with what we discuss about opacity: it is not just that algorithms are opaque, it is that they are reorganizing entire organizational structures.
Choudary describes companies that no longer know who is responsible for what, because machines coordinate without needing human consensus. It is Max Weber accelerated to neural-network speed. The Thinking Game – A documentary about Demis Hassabis and DeepMind: A documentary that films the pursuit of an obsession: Demis Hassabis has spent his entire life trying to solve intelligence. The Thinking Game, produced by the team behind the AlphaGo documentary, shows five years inside DeepMind and the crucial moments when AI leapt from games to solving real biological problems with AlphaFold. What stings here is that Hassabis solved a 50-year-old problem in biology and open-sourced the result. The uncomfortable question is: how many other Hassabises are inside corporate laboratories with the opposite incentives, keeping secrets? The Thinking Game is a portrait of what could be if the scientific impulse won out over the extractive one. We recommend watching it before any conversation about where the real progress in AI lies. Las horas del caos: La DANA. Crónica de una tragedia: Sergi Pitarch reconstructs, hour by hour, October 29, 2024, the day the DANA storm devastated Valencia. What sets this book apart is that it does not just recount what happened; it documents what was not done, who was responsible for silencing warnings, and what decisions were made in dark rooms while thousands were trapped. It is a long journalistic chronicle in the American tradition of deep investigative reporting. We connect it to the episode because the Valencia tragedy is a mirror: systems with algorithms that were supposed to predict, emergency teams that were supposed to communicate, protocols that were supposed to activate. Instead there were silences, opacities, and diluted responsibility: exactly what happens when algorithms fail and no one knows who pays the price. Pitarch writes so that the victims are not forgotten and so that the next tragedy is not repeated with the same negligence.
Anatomía de un instante: A series based on Javier Cercas's book, which examines Spain's 23-F, the attempted military coup of 1981, but does so as a psychologist of history: what turns a man into a hero in a single crucial instant? We bring it up here because the book is about how our systems, our institutions, and our power structures rest on unpredictable moments, on individual actions that algorithms cannot model. AI promises predictability, certainty, order. Cercas reminds us that history is a discipline of the unpredictable, and that the instants that define us do not come out of an equation. A final note: Thank you for being here. A year later, without a DeLorean and without time travel, but with the certainty that while we were trying to go back, the world kept moving forward. That was the real experiment: to see whether we could connect the dots again after twelve months in which the algorithms kept writing the script. The answer is yes. But the more uncomfortable question remains: do we really know where we stand inside that iron cage? Or have we only just noticed that there are walls? To get in touch, you can use our Twitter account (@conectantes), Instagram (conectandopuntos), or the contact form on our website, conectandopuntos.es. You can listen to us on iVoox, iTunes, or Spotify (search for our name; it's easy). Program credits: Intro: Stefan Kanterberg, "By by baby" (CC Attribution license). Closing: Stefan Kanterberg, "Guitalele's Happy Place" (CC Attribution license). Photo: created with AI. Want to sponsor this podcast? You can do so through this link. The post "Episodio 249: La jaula de hierro algorítmica" was first published on Conectando Puntos.
Welcome to episode 337 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and Ryan have hit the recording studio to bring you all the latest in cloud and AI news, from acquisitions and price hikes to new tools that Ryan somehow loves but also hates? We don't understand either… but let's get started! Titles we almost went with this week Prompt Engineering Our Way Into Trouble The Demo Worked Yesterday, We Swear It Scales Horizontally, Trust Us Responsible AI But Terrible Copy (Marketing Edition) General News 00:58 Watch ‘The Thinking Game' documentary for free on YouTube Google DeepMind is releasing the “The Thinking Game” documentary for free on YouTube starting November 25, marking the fifth anniversary of AlphaFold. The feature-length film provides behind-the-scenes access to the AI lab and documents the team’s work toward artificial general intelligence over five years. The documentary captures the moment when the AlphaFold team learned they had solved the 50-year protein folding problem in biology, a scientific achievement that recently earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry. This represents one of the most significant practical applications of deep learning to fundamental scientific research. The film was produced by the same award-winning team that created the AlphaGo documentary, which chronicled DeepMind’s earlier achievement in mastering the game of Go. For cloud and AI practitioners, this offers insight into how Google DeepMind approaches complex AI research problems and the development process behind their models. While this is primarily a documentary release rather than a technical product announcement, it provides context for understanding Google’s broader AI strategy and the research foundation underlying its cloud AI services. The AlphaFold model itself is available through Google Cloud for protein structure prediction workloads. 
01:54 Justin – “If you're not into technology, don't care about any of that, and don't care about AI and how they built all the AI models that are now powering the world of LLMs we have, you will not like this documentary.” 04:22 ServiceNow to buy Armis in $7.7 billion security deal • The Register ServiceNow is acquiring Armis for $7.75 billion to integrate real-time security intelligence with its Configuration Management Database, allowing customers to identify vulnerabilities across IT, OT, and medical devices and remediate them through automated workflows.
Demis Hassabis is the CEO of Google DeepMind. He joined Big Technology Podcast in early 2025 to discuss the cutting edge of AI and where the research is heading. In this conversation, we cover the path to artificial general intelligence, how long it will take to get there, how to build world models, whether AIs can be creative, and how AIs are trying to deceive researchers. Stay tuned for the second half, where we discuss Google's plan for smart glasses and Hassabis's vision for a virtual cell. Hit play for a fascinating discussion with an AI pioneer that will both break news and leave you deeply informed about the state of AI and its promising future. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com --- Wealthfront.com/bigtech. If eligible for the overall boosted 3.90% rate offered with this promo, your boosted rate is subject to change if the 3.25% base rate decreases during the 3-month promo period. The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC, not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 12/19/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable base APY. Instant withdrawals are subject to certain conditions and processing times may vary. Learn more about your ad choices. Visit megaphone.fm/adchoices
Is 2026 the year society finally pushes back against artificial intelligence? In this year's final episode, Paul Roetzer and Mike Kaput explore the immediate future of AGI, analyzing Demis Hassabis's warning of a shift ten times larger than the Industrial Revolution and Shane Legg's prediction of human-level intelligence by 2028. The hosts break down critical developments, including Google's Gemini 3 Flash, OpenAI's staggering valuation talks, and the rise of world models that simulate physical reality. Show Notes: Access the show notes and show links here Click here to take this week's AI Pulse. Timestamps: 00:00:00 — Intro 00:03:27 — AI Pulse 00:07:05 — AI Trends to Watch in 2026 00:31:59 — Demis Hassabis on the Future of Intelligence 00:42:35 — DeepMind Co-Founder on the Arrival of AGI 00:47:53 — Are AI Job Fears Overblown? 00:56:05 — Gemini 3 Flash 00:59:38 — OpenAI Eyes Billions in Fresh Funding 01:02:19 — OpenAI Releases New ChatGPT Images 01:04:18 — Karen Hao Issues AI Book Correction 01:08:18 — AI Keeps Getting Political (Roundup) 01:12:51 — AI World Models 01:17:31 — US Government Launches Tech Force This episode is brought to you by AI Academy by SmarterX. AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy
Discover how Google DeepMind is dominating the AI race with "The Thinking Game"! In this episode of Applelianos Podcast we analyze the documentary that reveals the secrets of Demis Hassabis: from chess prodigy to Nobel laureate for AlphaFold. We explore AlphaGo's victory at Go, protein breakthroughs that could help cure diseases, and the vision of AGI by 2030 with Gemini. Is Google unbeatable against OpenAI? Tune in for ethical risks, breakthroughs, and why this supremacy is changing the world. Don't miss it! #DeepMind #IA https://seoxan.es/crear_pedido_hosting Coupon code "APPLE" SPONSORED BY SEOXAN Professional SEO optimization for your business https://seoxan.es https://uptime.urtix.es //Links https://youtu.be/d95J8yzvjbQ?si=R04WmBmQeVIfGYIJ https://www.elmundo.es/tecnologia/2025/11/26/69271d8be9cf4a20538b458e.html# JOIN US LIVE Leave your opinion in the comments, ask questions, and be part of the most important conversation about the future of the iPad and the Apple ecosystem. Your voice counts! DID YOU ENJOY THE EPISODE? ✨ Give it a LIKE, SUBSCRIBE and ring the bell so you don't miss anything, COMMENT, and SHARE with your Appleliano friends. FOLLOW US ON ALL OUR PLATFORMS: YouTube: https://www.youtube.com/@Applelianos Telegram: https://t.me/+Jm8IE4n3xtI2Zjdk X (Twitter): https://x.com/ApplelianosPod Facebook: https://www.facebook.com/applelianos Apple Podcasts: https://apple.co/39QoPbO
This week in AI, the bubble keeps inflating despite fresh warnings, Google stages an AI comeback, and Chinese AI threatens Nvidia. Though fears around irrational AI spending used to be confined to skeptics, now even industry insiders like Google's Sundar Pichai and Demis Hassabis are voicing doubts. CNBC's Deirdre Bosa speaks to Josh Woodward, Alphabet's VP of Google Labs, Dan Niles, founder of Niles Investment Management, and founder of GPU management company Hydra Host Aaron Ginn for more. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How can you write science-based fiction without info-dumping your research? How can you use AI tools in a creative way, while still focusing on a human-first approach? Why is adapting to the fast pace of change so difficult and how can we make the most of this time? Jamie Metzl talks about Superconvergence and more. In the intro, How to avoid author scams [Written Word Media]; Spotify vs Audible audiobook strategy [The New Publishing Standard]; Thoughts on Author Nation and why constraints are important in your author life [Self-Publishing with ALLi]; Alchemical History And Beautiful Architecture: Prague with Lisa M Lilly on my Books and Travel Podcast. Today's show is sponsored by Draft2Digital, self-publishing with support, where you can get free formatting, free distribution to multiple stores, and a host of other benefits. Just go to www.draft2digital.com to get started. This show is also supported by my Patrons. Join my Community at Patreon.com/thecreativepenn Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. You can listen above or on your favorite podcast app or read the notes and links below. Here are the highlights and the full transcript is below. Show Notes How personal history shaped Jamie's fiction writing Writing science-based fiction without info-dumping The super convergence of three revolutions (genetics, biotech, AI) and why we need to understand them holistically Using fiction to explore the human side of genetic engineering, life extension, and robotics Collaborating with GPT-5 as a named co-author How to be a first-rate human rather than a second-rate machine You can find Jamie at JamieMetzl.com. 
Transcript of interview with Jamie Metzl Jo: Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. So welcome, Jamie. Jamie: Thank you so much, Jo. Very happy to be here with you. Jo: There is so much we could talk about, but let's start with you telling us a bit more about you and how you got into writing. From History PhD to First Novel Jamie: Well, I think like a lot of writers, I didn't know I was a writer. I was just a kid who loved writing. Actually, just last week I was going through a bunch of boxes from my parents' house and I found my autobiography, which I wrote when I was nine years old. So I've been writing my whole life and loving it. It was always something that was very important to me. When I finished my DPhil, my PhD at Oxford, and my dissertation came out, it just got scooped up by Macmillan in like two minutes. And I thought, “God, that was easy.” That got me started thinking about writing books. I wanted to write a novel based on the same historical period – my PhD was in Southeast Asian history – and I wanted to write a historical novel set in the same period as my dissertation, because I felt like the dissertation had missed the human element of the story I was telling, which was related to the Cambodian genocide and its aftermath. So I wrote what became my first novel, and I thought, “Wow, now I'm a writer.” I thought, “All right, I've already published one book. I'm gonna get this other book out into the world.” And then I ran into the brick wall of: it's really hard to be a writer. It's almost easier to write something than to get it published. I had to learn a ton, and it took nine years from when I started writing that first novel, The Depths of the Sea, to when it finally came out. 
But it was such a positive experience, especially to have something so personal to me as that story. I'd lived in Cambodia for two years, I'd worked on the Thai-Cambodian border, and I'm the child of a Holocaust survivor. So there was a whole lot that was very emotional for me. That set a pattern for the rest of my life as a writer, at least where, in my nonfiction books, I'm thinking about whatever the issues are that are most important to me. Whether it was that historical book, which was my first book, or Hacking Darwin on the future of human genetic engineering, which was my last book, or Superconvergence, which, as you mentioned in the intro, is my current book. But in every one of those stories, the human element is so deep and so profound. You can get at some of that in nonfiction, but I've also loved exploring those issues in deeper ways in my fiction. So in my more recent novels, Genesis Code and Eternal Sonata, I've looked at the human side of the story of genetic engineering and human life extension. And now my agent has just submitted my new novel, Virtuoso, about the intersection of AI, robotics, and classical music. With all of this, who knows what's the real difference between fiction and nonfiction? We're all humans trying to figure things out on many different levels. Shifting from History to Future Tech Jo: I knew that you were a polymath, someone who's interested in so many things, but the music angle with robotics and AI is fascinating. I do just want to ask you, because I was also at Oxford – what college were you at? Jamie: I was in St. Antony's. Jo: I was at Mansfield, so we were in that slightly smaller, less famous college group, if people don't know. Jamie: You know, but we're small but proud. Jo: Exactly. That's fantastic. You mentioned that you were on the historical side of things at the beginning and now you've moved into technology and also science, because this book Superconvergence has a lot of science. 
So how did you go from history and the past into science and the future? Biology and Seeing the Future Coming Jamie: It's a great question. I'll start at the end and then back up. A few years ago I was speaking at Lawrence Livermore National Laboratory, which is one of the big scientific labs here in the United States. I was a guest of the director and I was speaking to their 300 top scientists. I said to them, “I'm here to speak with you about the future of biology at the invitation of your director, and I'm really excited. But if you hear something wrong, please raise your hand and let me know, because I'm entirely self-taught. The last biology course I took was in 11th grade of high school in Kansas City.” Of course I wouldn't say that if I didn't have a lot of confidence in my process. But in many ways I'm self-taught in the sciences. As you know, Jo, and as all of your listeners know, the foundation of everything is curiosity and then a disciplined process for learning. Even our greatest super-specialists in the world now – whatever their background – the world is changing so fast that if anyone says, “Oh, I have a PhD in physics/chemistry/biology from 30 years ago,” the exact topic they learned 30 years ago is less significant than their process for continuous learning. More specifically, in the 1990s I was working on the National Security Council for President Clinton, which is the president's foreign policy staff. My then boss and now close friend, Richard Clarke – who became famous as the guy who had tragically predicted 9/11 – used to say that the key to efficacy in Washington and in life is to try to solve problems that other people can't see. For me, almost 30 years ago, I felt to my bones that this intersection of what we now call AI and the nascent genetics revolution and the nascent biotechnology revolution was going to have profound implications for humanity. So I just started obsessively educating myself. 
When I was ready, I started writing obscure national security articles. Those got a decent amount of attention, so I was invited to testify before the United States Congress. I was speaking out a lot, saying, “Hey, this is a really important story. A lot of people are missing it. Here are the things we should be thinking about for the future.” I wasn't getting the kind of traction that I wanted. I mentioned before that my first book had been this dry Oxford PhD dissertation, and that had led to my first novel. So I thought, why don't I try the same approach again – writing novels to tell this story about the genetics, biotech, and what later became known popularly as the AI revolution? That led to my two near-term sci-fi novels, Genesis Code and Eternal Sonata. On my book tours for those novels, when I explained the underlying science to people in my way, as someone who taught myself, I could see in their eyes that they were recognizing not just that something big was happening, but that they could understand it and feel like they were part of that story. That's what led me to write Hacking Darwin, as I mentioned. That book really unlocked a lot of things. I had essentially predicted the CRISPR babies that were born in China before it happened – down to the specific gene I thought would be targeted, which in fact was the case. After that book was published, Dr. Tedros, the Director-General of the World Health Organization, invited me to join the WHO Expert Advisory Committee on Human Genome Editing, which I did. It was a really great experience and got me thinking a lot about the upside of this revolution and the downside. The Birth of Superconvergence Jamie: I get a lot of wonderful invitations to speak, and I have two basic rules for speaking: Never use notes. Never ever. Never stand behind a podium. Never ever. Because of that, when I speak, my talks tend to migrate. 
I'd be speaking with people about the genetics revolution as it applied to humans, and I'd say, “Well, this is just a little piece of a much bigger story.” The bigger story is that after nearly four billion years of life on Earth, our one species has the increasing ability to engineer novel intelligence and re-engineer life. The big question for us, and frankly for the world, is whether we're going to be able to use that almost godlike superpower wisely. As that idea got bigger and bigger, it became this inevitable force. You write so many books, Jo, that I think it's second nature for you. Every time I finish a book, I think, “Wow, that was really hard. I'm never doing that again.” And then the books creep up on you. They call to you. At some point you say, “All right, now I'm going to do it.” So that was my current book, Superconvergence. Like everything, every journey you take a step, and that step inspires another step and another. That's why writing and living creatively is such a wonderfully exciting thing – there's always more to learn and always great opportunities to push ourselves in new ways. Balancing Deep Research with Good Storytelling Jo: Yeah, absolutely. I love that you've followed your curiosity and then done this disciplined process for learning. I completely understand that. But one of the big issues with people like us who love the research – and having read your Superconvergence, I know how deeply you go into this and how deeply you care that it's correct – is that with fiction, one of the big problems with too much research is the danger of brain-dumping. Readers go to fiction for escapism. They want the interesting side of it, but they want a story first. What are your tips for authors who might feel like, “Where's the line between putting in my research so that it's interesting for readers, but not going too far and turning it into a textbook?” How do you find that balance? Jamie: It's such a great question. 
I live in New York now, but I used to live in Washington when I was working for the U.S. government, and there were a number of people I served with who later wrote novels. Some of those novels felt like policy memos with a few sex scenes – and that's not what to do. To write something that's informed by science or really by anything, everything needs to be subservient to the story and the characters. The question is: what is the essential piece of information that can convey something that's both important to your story and your character development, and is also an accurate representation of the world as you want it to be? I certainly write novels that are set in the future – although some of them were a future that's now already happened because I wrote them a long time ago. You can make stuff up, but as an author you have to decide what your connection to existing science and existing technology and the existing world is going to be. I come at it from two angles. One: I read a huge number of scientific papers and think, “What does this mean for now, and if you extrapolate into the future, where might that go?” Two: I think about how to condense things. We've all read books where you're humming along because people read fiction for story and emotional connection, and then you hit a bit like: “I sat down in front of the president, and the president said, ‘Tell me what I need to know about the nuclear threat.'” And then it's like: insert memo. That's a deal-killer. It's like all things – how do you have a meaningful relationship with another person? It's not by just telling them your story. Even when you're telling them something about you, you need to be imagining yourself sitting in their shoes, hearing you. These are very different disciplines, fiction and nonfiction. But for the speculative nonfiction I write – “here's where things are now, and here's where the world is heading” – there's a lot of imagination that goes into that too. 
It feels in many ways like we're living in a sci-fi world because the rate of technological change has been accelerating continuously, certainly for the last 12,000 years since the dawn of agriculture. It's a balance. For me, I feel like I'm a better fiction writer because I write nonfiction, and I'm a better nonfiction writer because I write fiction. When I'm writing nonfiction, I don't want it to be boring either – I want people to feel like there's a story and characters and that they can feel themselves inside that story. Jo: Yeah, definitely. I think having some distance helps as well. If you're really deep into your topics, as you are, you have to leave that manuscript a little bit so you can go back with the eyes of the reader as opposed to your eyes as the expert. Then you can get their experience, which is great. Looking Beyond Author-Focused AI Fears Jo: I want to come to your technical knowledge, because AI is a big thing in the author and creative community, like everywhere else. One of the issues is that creators are focusing on just this tiny part of the impact of AI, and there's a much bigger picture. For example, in 2024, Demis Hassabis from Google DeepMind and his collaborative partner John Jumper won the Nobel Prize for Chemistry with AlphaFold. It feels to me like there's this massive world of what's happening with AI in health, climate, and other areas, and yet we are so focused on a lot of the negative stuff. Maybe you could give us a couple of things about what there is to be excited and optimistic about in terms of AI-powered science? Jamie: Sure. I'm so excited about all of the new opportunities that AI creates. But I also think there's a reason why evolution has preserved this very human feeling of anxiety: because there are real dangers. Anybody who's Pollyanna-ish and says, “Oh, the AI story is inevitably positive,” I'd be distrustful. And anyone who says, “We're absolutely doomed, this is the end of humanity,” I'd also be distrustful. 
So let me tell you the positives and the negatives, and maybe some thoughts about how we navigate toward the former and away from the latter. AI as the New Electricity Jamie: When people think of AI right now, they're thinking very narrowly about these AI tools and ChatGPT. But we don't think of electricity that way. Nobody says, “I know electricity – electricity is what happens at the power station.” We've internalised the idea that electricity is woven into not just our communication systems or our houses, but into our clothes, our glasses – it's woven into everything and has super-empowered almost everything in our modern lives. That's what AI is. In Superconvergence, the majority of the book is about positive opportunities: In healthcare, moving from generalised healthcare based on population averages to personalised or precision healthcare based on a molecular understanding of each person's individual biology. As we build these massive datasets like the UK Biobank, we can take a next jump toward predictive and preventive healthcare, where we're able to address health issues far earlier in the process, when interventions can be far more benign. I'm really excited about that, not to mention the incredible new kinds of treatments – gene therapies, or pharmaceuticals based on genetics and systems-biology analyses of patients. Then there's agriculture. Over the last hundred years, because of the technologies of the Green Revolution and synthetic fertilisers, we've had an incredible increase in agricultural productivity. That's what's allowed us to quadruple the global population. But if we just continue agriculture as it is, as we get towards ten billion wealthier, more empowered people wanting to eat like we eat, we're going to have to wipe out all the wild spaces on Earth to feed them. These technologies help provide different paths toward increasing agricultural productivity with fewer inputs of land, water, fertiliser, insecticides, and pesticides. 
That's really positive. I could go on and on about these positives – and I do – but there are very real negatives. I was a member of the WHO Expert Advisory Committee on Human Genome Editing after the first CRISPR babies were very unethically created in China. I'm extremely aware that these same capabilities have potentially incredible upsides and very real downsides. That's the same as every technology in the past, but this is happening so quickly that it's triggering a lot of anxieties. Governance, Responsibility, and Why Everyone Has a Role Jamie: The question now is: how do we optimise the benefits and minimise the harms? The short, unsexy word for that is governance. Governance is not just what governments do; it's what all of us do. That's why I try to write books, both fiction and nonfiction, to bring people into this story. If people “other” this story – if they say, “There's a technology revolution, it has nothing to do with me, I'm going to keep my head down” – I think that's dangerous. The way we're going to handle this as responsibly as possible is if everybody says, “I have some role. Maybe it's small, maybe it's big. The first step is I need to educate myself. Then I need to have conversations with people around me. I need to express my desires, wishes, and thoughts – with political leaders, organisations I'm part of, businesses.” That has to happen at every level. You're in the UK – you know the anti-slavery movement started with a handful of people in Cambridge and grew into a global movement. I really believe in the power of ideas, but ideas don't spread on their own. These are very human networks, and that's why writing, speaking, communicating – probably for every single person listening to this podcast – is so important. Jo: Mm, yeah. Fiction Like AI 2041 and Thinking Through the Issues Jo: Have you read AI 2041 by Kai-Fu Lee and Chen Qiufan? Jamie: No. I heard a bunch of their interviews when the book came out, but I haven't read it. 
Jo: I think that's another good one because it's fiction – a whole load of short stories. It came out a few years ago now, but the issues they cover in the stories, about different people in different countries – I remember one about deepfakes – make you think more about the topics and help you figure out where you stand. I think that's the issue right now: it's so complex, there are so many things. I'm generally positive about AI, but of course I don't want autonomous drone weapons, you know? The Messy Reality of “Bad” Technologies Jamie: Can I ask you about that? Because this is why it's so complicated. Like you, I think nobody wants autonomous killer drones anywhere in the world. But if you right now were the defence minister of Ukraine, and your children are being kidnapped, your country is being destroyed, you're fighting for your survival, you're getting attacked every night – and you're getting attacked by the Russians, who are investing more and more in autonomous killer robots – you kind of have two choices. You can say, “I'm going to surrender,” or, “I'm going to use what technology I have available to defend myself, and hopefully fight to either victory or some kind of stand-off.” That's what our societies did with nuclear weapons. Maybe not every American recognises that Churchill gave Britain's nuclear secrets to America as a way of greasing the wheels of the Anglo-American alliance during the Second World War – but that was our programme: we couldn't afford to lose that war, and we couldn't afford to let the Nazis get nuclear weapons before we did. So there's the abstract feeling of, “I'm against all war in the abstract. I'm against autonomous killer robots in the abstract.” But if I were the defence minister of Ukraine, I would say, “What will it take for us to build the weapons we can use to defend ourselves?” That's why all this stuff gets so complicated. And frankly, it's why the relationship between fiction and nonfiction is so important. 
If every novel had a situation where every character said, “Oh, I know exactly the right answer,” and then they just did the right answer and it was obviously right, it wouldn't make for great fiction. We're dealing with really complex humans. We have conflicting impulses. We're not perfect. Maybe there are no perfect answers – but how do we strive toward better rather than worse? That's the question. Jo: Absolutely. I don't want to get too political on things. How AI Is Changing the Writing Life Jo: Let's come back to authors. In terms of the creative process, the writing process, the research process, and the business of being an author – what are some of the ways that you already use AI tools, and some of the ways, given your futurist brain, that you think things are going to change for us? Jamie: Great question. I'll start with a little middle piece. I found you, Jo, through GPT-5. I asked ChatGPT, “I'm coming out with this book and I want to connect with podcasters who are a little different from the ones I've done in the past. I've been a guest on Joe Rogan twice and some of the bigger podcasts. Make me a list of really interesting people I can have great conversations with.” That's how I found you. So this is one reward of that process. Let me say that in the last year I've worked on three books, and I'll explain how my relationship with AI has changed over those books. Cleaning Up Citations (and Getting Burned) Jamie: First is the highly revised paperback edition of Superconvergence. When the hardback came out, I had – I don't normally work with research assistants because I like to dig into everything myself – but the one thing I do use a research assistant for is that I can't be bothered, when I'm writing something, to do the full Chicago-style footnote if I'm already referencing an academic paper. So I'd just put the URL as the footnote and then hire a research assistant and say, “Go to this URL and change it into a Chicago-style citation. 
That's it.” Unfortunately, my research assistant on the hardback used early-days ChatGPT for that work. He did the whole thing, came back, everything looked perfect. I said, “Wow, amazing job.” It was only later, as I was going through them, that I realised something like 50% of them were invented footnotes. It was very painful to go back and fix, and it took ten times more time. With the paperback edition, I didn't use AI that much, but I did say things like, “Here's all the information – generate a Chicago-style citation.” That was better. I noticed there were a few things where I stopped using the thesaurus function on Microsoft Word because I'd just put the whole paragraph into the AI and say, “Give me ten other options for this one word,” and it would be like a contextual thesaurus. That was pretty good. Talking to a Robot Pianist Character Jamie: Then, for my new novel Virtuoso, I was writing a character who is a futurist robot that plays the piano very beautifully – not just humanly, but almost finding new things in the music we've written and composing music that resonates with us. I described the actions of that robot in the novel, but I didn't describe the inner workings of the robot's mind. In thinking about that character, I realised I was the first science-fiction writer in history who could interrogate a machine about what it was “thinking” in a particular context. I had the most beautiful conversations with ChatGPT, where I would give scenarios and ask, “What are you thinking? What are you feeling in this context?” It was all background for that character, but it was truly profound. Co-Authoring The AI Ten Commandments with GPT-5 Jamie: Third, I have another book coming out in May in the United States. I gave a talk this summer at the Chautauqua Institution in upstate New York about AI and spirituality. 
I talked about the history of our human relationship with our technology, about how all our religious and spiritual traditions have deep technological underpinnings – certainly our Abrahamic religions are deeply connected to farming, and Protestantism to the printing press. Then I had a section about the role of AI in generating moral codes that would resonate with humans. Everybody went nuts for this talk, and I thought, “I think I'm going to write a book.” I decided to write it differently, with GPT-5 as my named co-author. The first thing I did was outline the entire book based on the talk, which I'd already spent a huge amount of time thinking about and organising. Then I did a full outline of the arguments and structures. Then I trained GPT-5 on my writing style. The way I did it – which I fully describe in the introduction to the book – was that I'd handle all the framing: the full introduction, the argument, the structure. But if there was a section where, for a few paragraphs, I was summarising a huge field of data, even something I knew well, I'd give GPT-5 the intro sentence and say, “In my writing style, prepare four paragraphs on this.” For example, I might write: “AI has the potential to see us humans like we humans see ant colonies.” Then I'd say, “Give me four paragraphs on the relationship between the individual and the collective in ant colonies.” I could have written those four paragraphs myself, but it would've taken a month to read the life's work of E.O. Wilson and then write them. GPT-5 wrote them in seconds or minutes, in its thinking mode. I'd then say, “It's not quite right – change this, change that,” and we'd go back and forth three or four times. Then I'd edit the whole thing and put it into the text. So this book that I could have written on my own in a year, I wrote a first draft of with GPT-5 as my named co-author in two days. 
The whole project will take about six months from start to finish, and I'm having massive human editing – multiple edits from me, plus a professional editor. It's not a magic AI button. But I feel strongly about listing GPT-5 as a co-author because I've written it differently than previous books. I'm a huge believer in the old-fashioned lone author struggling and suffering – that's in my novels, and in Virtuoso I explore that. But other forms are going to emerge, just like video games are a creative, artistic form deeply connected to technology. The novel hasn't been around forever – the current format is only a few centuries old – and forms are always changing. There are real opportunities for authors, and there will be so much crap flooding the market because everybody can write something and put it up on Amazon. But I think there will be a very special place for thoughtful human authors who have an idea of what humans do at our best, and who translate that into content other humans can enjoy. Traditional vs Indie: Why This Book Will Be Self-Published Jo: I'm interested – you mentioned that it's your named co-author. Is this book going through a traditional publisher, and what do they think about that? Or are you going to publish it yourself? Jamie: It's such a smart question. What I found quickly is that when you get to be an author later in your career, you have all the infrastructure – a track record, a fantastic agent, all of that. But there were two things that were really important to me here: I wanted to get this book out really fast – six months instead of a year and a half. It was essential to me to have GPT-5 listed as my co-author, because if it were just my name, I feel like it would be dishonest. Readers who are used to reading my books – I didn't want to present something different than what it was. I spoke with my agent, who I absolutely love, and she said that for this particular project it was going to be really hard in traditional publishing. 
So I did a huge amount of research, because I'd never done anything in the self-publishing world before. I looked at different models. There was one hybrid model that's basically the same as traditional, but you pay for the things the publisher would normally pay for. I ended up not doing that. Instead, I decided on a self-publishing route where I disaggregated the publishing process. I found three teams: one for producing the book, one for getting the book out into the world, and a smaller one for the audiobook. I still believe in traditional publishing – there's a lot of wonderful human value-add. But some works just don't lend themselves to traditional publishing. For this book, which is called The AI Ten Commandments, that's the path I've chosen. Jo: And when's that out? I think people will be interested. Jamie: April 26th. Those of us used to traditional publishing think, “I've finished the book, sold the proposal, it'll be out any day now,” and then it can be a year and a half. It's frustrating. With this, the process can be much faster because it's possible to control more of the variables. But the key – as I was saying – is to make sure it's as good a book as everything else you've written. It's great to speed up, but you don't want to compromise on quality. The Coming Flood of Excellent AI-Generated Work Jo: Yeah, absolutely. We're almost out of time, but I want to come back to your “flood of crap” and the “AI slop” idea that's going around. Because you are working with GPT-5 – and I do as well, and I work with Claude and Gemini – and right now there are still issues. Like you said about referencing, there are still hallucinations, though fewer. But fast-forward two, five years: it's not a flood of crap. It's a flood of excellent. It's a flood of stuff that's better than us. Jamie: We're humans. It's better than us in certain ways. If you have farm machinery, it's better than us at certain aspects of farming. I'm a true humanist. 
I think there will be lots of things machines do better than us, but there will be tons of things we do better than them. There's a reason humans still care about chess, even though machines can beat humans at chess. Some people are saying things I fully disagree with, like this concept of AGI – artificial general intelligence – where machines do everything better than humans. I've summarised my position in seven letters: “AGI is BS.” The only way you can believe in AGI in that sense is if your concept of what a human is and what a human mind is is so narrow that you think it's just a narrow range of analytical skills. We are so much more than that. Humans represent almost four billion years of embodied evolution. There's so much about ourselves that we don't know. As incredible as these machines are and will become, there will always be wonderful things humans can do that are different from machines. What I always tell people is: whatever you're doing, don't be a second-rate machine. Be a first-rate human. If you're doing something and a machine is doing that thing much better than you, then shift to something where your unique capacities as a human give you the opportunity to do something better. So yes, I totally agree that the quality of AI-generated stuff will get better. But I think the most creative and successful humans will be the ones who say, “I recognise that this is creating new opportunities, and I'm going to insert my core humanity to do something magical and new.” People are “othering” these technologies, but the technologies themselves are magnificent human-generated artefacts. They're not alien UFOs that landed here. It's a scary moment for creatives, no doubt, because there are things all of us did in the past that machines can now do really well. But this is the moment where the most creative people ask themselves, “What does it mean for me to be a great human?” The pat answers won't apply. In my Virtuoso novel I explore that a lot. 
The idea that “machines don't do creativity” – they will do incredible creativity; it just won't be exactly human creativity. We will be potentially huge beneficiaries of these capabilities, but we really have to believe in and invest in the magic of our core humanity. Where to Find Jamie and His Books Jo: Brilliant. So where can people find you and your books online? Jamie: Thank you so much for asking. My website is jamiemetzl.com – and my books are available everywhere. Jo: Fantastic. Thanks so much for your time, Jamie. That was great. Jamie: Thank you, Joanna.

The post Writing The Future, And Being More Human In An Age of AI With Jamie Metzl first appeared on The Creative Penn.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome to AI Unraveled (November 20, 2025): Your daily strategic briefing on the business impact of AI. Today's Highlights: Saudi Arabia signs landmark AI deals with xAI and Nvidia; Europe scales back crucial AI and privacy laws; Anthropic courts Microsoft and Nvidia to break free from AWS; and Google's Gemini 3 climbs leaderboards, reinforcing its path toward AGI. Strategic Pillars & Topics:
Google's much-anticipated new large language model Gemini 3 begins rolling out today. We'll tell you what we learned from an early product briefing and bring you our conversation with Google executives Demis Hassabis and Josh Woodward, just ahead of the launch.

Guests: Demis Hassabis, chief executive and co-founder of Google DeepMind; Josh Woodward, vice president of Google Labs and Google Gemini.

Additional Reading: The Man Who ‘A.G.I.-Pilled' Google

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
A top Google scientist and 2024 Nobel laureate said that the most important skill for the next generation will be "learning how to learn" to keep pace with change as artificial intelligence transforms education and the workplace. Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google's DeepMind, said rapid technological change demands a new approach to learning and skill development. "It's very hard to predict the future, like 10 years from now, in normal cases. It's even harder today, given how fast AI is changing, even week by week," Hassabis told the audience. "The only thing you can say for certain is that huge change is coming." The neuroscientist and former chess prodigy said artificial general intelligence—a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can—could arrive within a decade. This, he said, will bring dramatic advances and a possible future of "radical abundance" despite acknowledged risks. Hassabis emphasized the need for "meta-skills," such as understanding how to learn and optimizing one's approach to new subjects, alongside traditional disciplines like math, science and humanities. "One thing we'll know for sure is you're going to have to continually learn ... throughout your career," he said. The DeepMind co-founder, who established the London-based research lab in 2010 before Google acquired it four years later, shared the 2024 Nobel Prize in Chemistry for developing AI systems that accurately predict protein folding—a breakthrough for medicine and drug discovery. Greek Prime Minister Kyriakos Mitsotakis joined Hassabis at the Athens event after discussing ways to expand AI use in government services. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality. 
"Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical," he said. "And if they see ... obscene wealth being created within very few companies, this is a recipe for significant social unrest." This article was provided by The Associated Press.
Google faces the greatest innovator's dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?

Today we tell the complete story of Google's 20+ year AI journey: from their first tiny language model in 2001 through the creation of Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk's fury over Google's DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google. This is the story of how the world's greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?

Sponsors:
Many thanks to our fantastic Fall '25 Season partners:
J.P. Morgan Payments
Sentry
WorkOS
Shopify

Acquired's 10th Anniversary Celebration!
When: October 20th, 4:00 PM PT
Who: All of you!
Where: https://us02web.zoom.us/j/84061500817?pwd=opmlJrbtOAen4YOTGmPlNbrOMLI8oo.1

Links:
Sign up for email updates and vote on future episodes!
Geoff Hinton's 2007 Tech Talk at Google
Our recent ACQ2 episode with Tobi Lutke
Worldly Partners' Multi-Decade Alphabet Study
In the Plex
Supremacy
Genius Makers
All episode sources

Carve Outs:
We're hosting the Super Bowl Innovation Summit!
F1: The Movie
Travelpro suitcases
Glue Guys Podcast
Sea of Stars
Stepchange Podcast

More Acquired:
Get email updates and vote on future episodes!
Join the Slack
Subscribe to ACQ2
Check out the latest swag in the ACQ Merch Store!

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
What will it actually take to get to AGI? Today we unpack the “jagged frontier” of AI capabilities — systems that can dazzle at PhD-level reasoning one moment but stumble on high school math the next. We look at Demis Hassabis' timeline and critique of current models, the debate over whether today's AI really operates at PhD level, and why continual learning and memory remain the missing breakthroughs. We also explore how coding agents, real-world usage data, and persistent context may become critical steps on the road to AGI. Finally, in headlines: lawsuits over AI search, Apple leadership changes, OpenAI's renegotiated deal with Microsoft, and layoffs at xAI.
Welcome back to another episode of Upside at the EUVC Podcast, where Dan Bowyer, Mads Jensen of SuperSeed, Andrew J Scott of 7percent Ventures, and Lomax unpack the forces shaping European venture capital.

This week, veteran journalist Mike Butcher (ex-TechCrunch Europe, The Europas, TechFugees) joins the pod. From the creator economy eating media brands, to Europe's fragmented ecosystem and the capital gap that just won't die, we dive into EU-Inc, Draghi's unfulfilled reforms, ASML's surprise bet on Mistral, Europe's defense awakening, Klarna's IPO, and quantum's hot streak.

Here's what's covered:
00:01 – Mike's Reset: TechCrunch Europe closes; Mike reflects on redundancy, a summer off, and dabbling in social and video.
03:00 – Media Evolution & Creator Economy: From '90s trade mags → TechCrunch → The Europas & TechFugees. Blogs as early social media; today's creators (MrBeast, Bari Weiss, Cleo Abram) echo that era. Bloomberg pushes reporters front and center as media becomes personality-driven.
06:45 – Europe's Ecosystem & Debate Culture: Europe isn't Silicon Valley's 101 highway — it's dozens of fragmented hubs. Conferences like Slush, Web Summit, and VivaTech anchor the scene, but the missing ingredient is debate. US VCs spar on stage then grab a beer; Europe is still too polite.
12:00 – All-In Summit Debrief: Mads' takeaways from LA: Musk on robotics (the "hand" bottleneck), Demis Hassabis on AGI (5–10 yrs away), Eric Schmidt on the US–China AI race, Alex Karp on Europe's regulatory failures. The Valley vibe captured, but it's only one voice.
17:00 – EU-Inc & Draghi Report: Draghi's 383 recommendations, just 11% implemented. €16T in pensions sit mostly in bonds; only 0.02–0.03% flows into VC (vs 1–2% in the US). Permitting bottlenecks: 44 months for energy approvals. The panel calls for a Brussels "crack unit," employee stock option reform, and fixing skilled migration.
35:00 – Deal of the Week: ASML × Mistral: ASML leads a €2B round in Mistral at an €11B valuation. Strategic and cultural fit (Netherlands ↔ Paris) mattered more than sovereignty. Mads: 14× revenue is a bargain vs US peers. Andrew: proof Europe's VCs are too small — corporates must fill the gap. Lomax: ASML knows it's a one-trick pony with 90% lithography share; diversifying into AI hedges risk.
49:00 – Defense & Industrial Base: Russian drones hit Poland, NATO urgency spikes. The UK pledges to lift defense spend to 2.5% of GDP by 2027, but procurement bottlenecks persist. Poland cuts red tape under fire; the UK moves at peacetime pace. Andrew: real deterrence is industrial capacity. Mike: primes must be forced to buy from startups; dual-use innovators like Helsing show the way.
59:00 – Klarna IPO & the Klarna Mafia: Klarna IPOs at $15B (down from its $46B peak). Oversubscribed; Sequoia nets ~$3.5B; Atomico: 12M → 150M. A new "Klarna Mafia" of angels and operators will recycle liquidity back into Europe's ecosystem.
01:03:00 – Quantum's Hot Streak: PsiQuantum ($7B, Bristol roots), Quantinuum ($10B, Cambridge), IQM (Finland unicorn), Oxford Ionics' $1B exit. Europe has parity in talent but lacks growth capital. Lomax: "Quantum is hot, but a winter will come." Andrew: Europe can win here — if the money shows up.
01:05:00 – Wrap-up: The pod ends on optimism: Europe may not own AGI, but in quantum it has a fair fight.
(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win
(2:39) What is Google DeepMind? How does it interact with Google and Alphabet?
(4:01) Genie 3 world model
(9:21) State of robotics models, form factors, and more
(14:42) AI science breakthroughs, measuring AGI
(20:49) Nano-Banana and the future of creative tools, democratization of creativity
(24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science

Thanks to our partners for making this happen!
Solana: https://solana.com/
OKX: https://www.okx.com/
Google Cloud: https://cloud.google.com/
IREN: https://iren.com/
Oracle: https://www.oracle.com/
Circle: https://www.circle.com/
BVNK: https://www.bvnk.com/

Follow Demis: https://x.com/demishassabis
Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg
Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod
Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect
Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. Fertility rates in the United States are currently near historic lows, largely because fewer women are having children in their 20s. As women delay starting families, many are opting for egg freezing, the process of retrieving and freezing unfertilized eggs, to preserve their fertility for the future. Does egg freezing provide women with a way to pause their biological clock? Correspondent Lesley Stahl interviews women who have decided to freeze their eggs and explores what the process entails physically, emotionally and financially. She also speaks with fertility specialists and an ethicist about success rates, equity issues and the increasing market potential of egg freezing. This is a double-length segment. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Demis Hassabis is the CEO of Google DeepMind and a Nobel Prize winner for his groundbreaking work in protein structure prediction using AI. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep475-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/demis-hassabis-2-transcript

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Demis's X: https://x.com/demishassabis
DeepMind's X: https://x.com/GoogleDeepMind
DeepMind's Instagram: https://instagram.com/GoogleDeepMind
DeepMind's Website: https://deepmind.google/
Gemini's Website: https://gemini.google.com/
Isomorphic Labs: https://isomorphiclabs.com/
The MANIAC (book): https://amzn.to/4lOXJ81
Life Ascending (book): https://amzn.to/3AhUP7z

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Hampton: Community for high-growth founders and CEOs. Go to https://joinhampton.com/lex
Fin: AI agent for customer service. Go to https://fin.ai/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
AG1: All-in-one daily nutrition drink. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(00:29) - Sponsors, Comments, and Reflections
(08:40) - Learnable patterns in nature
(12:22) - Computation and P vs NP
(21:00) - Veo 3 and understanding reality
(25:24) - Video games
(37:26) - AlphaEvolve
(43:27) - AI research
(47:51) - Simulating a biological organism
(52:34) - Origin of life
(58:49) - Path to AGI
(1:09:35) - Scaling laws
(1:12:51) - Compute
(1:15:38) - Future of energy
(1:19:34) - Human nature
(1:24:28) - Google and the race to AGI
(1:42:27) - Competition and AI talent
(1:49:01) - Future of programming
(1:55:27) - John von Neumann
(2:04:41) - p(doom)
(2:09:24) - Humanity
(2:12:30) - Consciousness and quantum computation
(2:18:40) - David Foster Wallace
(2:25:54) - Education and research

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
Episode 18 of The Basic Income Show! What happened at this year's Basic Income Guarantee (BIG) Conference? Let's talk about Zohran Mamdani and his Guaranteed Basic Income Bill.

Chapters:
00:00 Welcome to The Basic Income Show
00:25 The BIG Conference
08:17 Union of Basic Income Participants
22:29 Newark New Jersey GBI Program Results
27:14 Comingle Update
28:54 Neurodivergence and UBI
35:51 Zohran Mamdani has co-sponsored a GBI bill
40:51 Canada's New Basic Income Bill S-206
54:33 Georgia's In Her Hands GBI Program News
59:43 Ireland's Basic Income for Artists Program Extended
1:02:46 Vinod Khosla on AI and UBI
1:07:24 New NSF Study About AI and UBI
1:15:08 Demis Hassabis on AI and UBI
1:19:16 Phonely's New Call Center AI
1:26:36 ElevenLabs' New V3 Audio AI
1:32:10 Trump's AI Czar David Sacks on AI and UBI
1:33:00 Economist Ann Pettifor on UBI
1:38:36 Basic Income for Climate Activists in Tuvalu
1:46:26 Concluding Remarks

Summary:
In this conversation, Scott Santens and Conrad Shaw discuss the latest developments in the Basic Income movement, including the recent BIG conference in DC, community engagement, and the establishment of the Union of Basic Income Participants. They explore the importance of mutual aid, the impact of AI on employment, and legislative updates regarding Basic Income. The discussion also addresses critiques of Basic Income and highlights global perspectives on its implementation, emphasizing the need for economic empowerment and collective action.

AI Job Disruption Calculator: https://fundforhumanity.org/national-science-foundation-ai-worker-impact-report/
Vinod Khosla video: https://www.youtube.com/watch?v=8JZg0SuJozo
Kim Pate video: https://www.youtube.com/watch?v=DNFaXV1zeWc&t=443s
See my ongoing compilation of UBI evidence on Bluesky: https://bsky.app/profile/scottsantens.com/post/3lckzcleo7s24
See my ongoing compilation of UBI evidence on X: https://x.com/scottsantens/status/1766213155967955332
For more info about UBI, please refer to my UBI FAQ: http://scottsantens.com/basic-income-faq
Donate to the Income To Support All Foundation to support UBI projects: https://www.itsafoundation.org
Subscribe to the ITSA Newsletter for monthly UBI news: https://itsanewsletter.beehiiv.com/subscribe
Visit Basic Income Today for daily UBI news: https://basicincometoday.com
Sign up for the Comingle waitlist for voluntary UBI: https://www.comingle.us
Follow Scott: https://linktr.ee/scottsantens
Follow Conrad: https://bsky.app/profile/theubiguy.bsky.social https://www.linkedin.com/in/conradshaw/
Follow Josh: https://bsky.app/profile/misterjworth.bsky.social https://www.linkedin.com/in/joshworth/

Special thanks to: Gisele Huff, Haroon Mokhtarzada, Steven Grimm, Judith Bliss, Lowell Aronoff, Jessica Chew, Katie Moussouris, David Ruark, Tricia Garrett, A.W.R., Daryl Smith, Larry Cohen, John Steinberger, Philip Rosedale, Liya Brook, Frederick Weber, Laurel gillespie, Dylan Hirsch-Shell, Tom Cooper, Robert Collins, Joanna Zarach, Mgmguy, Daragh Ward, Albert Wenger, Andrew Yang, Peter T Knight, Michael Finney, David Ihnen, Steve Roth, Miki Phagan, Walter Schaerer, Elizabeth Corker, Albert, Daniel Brockman, Natalie Foster, Joe Ballou, Arjun, Justin Dart, Felix Ling, S, Jocelyn Hockings, Mark Donovan, Jason Clark, Chuck Cordes, Mark Broadgate, Leslie Kausch, Braden Ferrin, Juro Antal, Austin, Deanna McHugh, Stephen Castro-Starkey, and all my other patrons for their support. If you'd like to see your name here in future video descriptions, you can do so by becoming a patron on Patreon at the UBI Producer level or above.
Patreon: https://www.patreon.com/scottsantens/membership
#universalbasicincome #BasicIncome #UBI
Interview with Stephen Witt
Altman's Gentle Singularity
Sutskever video: start at 5:50-6:40
Paris on Apple Glass
OpenAI slams court order to save all ChatGPT logs, including deleted chats
Disney and Universal Sue A.I. Firm for Copyright Infringement
Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Futurism on the paper
Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss
YouTube Loosens Rules Guiding the Moderation of Videos
Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence'
Meta and Yandex are de-anonymizing Android users' web browsing identifiers
Amazon 'testing humanoid robots to deliver packages'
Google battling 'fox infestation' on roof of £1bn London office
23andMe's Former CEO Pushes Purchase Price Nearly $50 Million Higher
Code to control vocal production with hands
Warner Bros. Discovery to split into two public companies by next year
Social media creators to overtake traditional media in ad revenue this year

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Stephen Witt

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
agntcy.org
smarty.com/twit
monarchmoney.com with code TWIT
spaceship.com/twit
This week, we take a field trip to Google and report back about everything the company announced at its biggest show of the year, Google I/O. Then, we sit down with Google DeepMind's chief executive and co-founder, Demis Hassabis, to discuss what his A.I. lab is building, the future of education, and what life could look like in 2030.

Guest:
Demis Hassabis, co-founder and chief executive of Google DeepMind

Additional Reading:
At Google I/O, everything is changing and normal and scary and chill
Google Unveils A.I. Chatbot, Signaling a New Era for Search
Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I.

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Bird flu, which has long been an emerging threat, took a significant turn in 2024 with the discovery that the virus had jumped from a wild bird to a cow. In just over a year, the pathogen has spread through dairy herds and poultry flocks across the United States. It has also infected people, resulting in 70 confirmed cases, including one fatality. Correspondent Bill Whitaker spoke with veterinarians and virologists who warn that, if unchecked, this outbreak could lead to a new pandemic. They also raise concerns about the Biden administration's slow response in 2024 and now the Trump administration's decision to lay off over 100 key scientists. Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. One of the most awe-inspiring and mysterious migrations in the natural world is currently taking place, stretching from Mexico to the United States and Canada. This incredible spectacle involves millions of monarch butterflies embarking on a monumental aerial journey. Correspondent Anderson Cooper reports from the mountains of Mexico, where the monarchs spent the winter months sheltering in trees before emerging from their slumber to take flight. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
How can AI help us understand and master deeply complex systems—from the game Go, which has roughly 10^170 possible board positions, to proteins, which, on average, can fold in an estimated 10^300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher and the co-founder and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion in Go, and later created AlphaFold, which solved the 50-year-old protein folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/ Listen to more from Possible here. Learn more about your ad choices. Visit podcastchoices.com/adchoices