More than 35,000 people attended the recent India AI Impact Summit in Delhi, which featured speeches from more than 20 heads of state and dozens of technology company leaders, including Sam Altman of OpenAI, Dario Amodei of Anthropic, and Demis Hassabis of Google DeepMind. In this episode, host David Sandalow offers his reflections on the Summit and speaks with Arunabha Ghosh, President of CEEW, a leading Delhi-based public policy think tank. Ghosh offers his views on the Summit, data center construction in India and around the world, and the role of AI in sustainable development, among other topics. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Google just dropped Gemini 3 Flash—a model that outperforms Gemini 2.5 Pro (their last top model) while running 3x faster at less than 1/4 the cost. It's frontier-level reasoning at Flash-level speed, and it's rolling out globally right now. We're sitting down with Logan Kilpatrick from Google DeepMind to explore what this actually means for developers, knowledge workers, and anyone trying to figure out how AI fits into their workflow. What we'll cover:
Health systems have spent 20 years optimizing for the patient who searches, clicks, and reads. They are not optimizing for the agent that queries, evaluates, and routes. Those are two different audiences — and most organizations are only ready for one of them. The digital front door was built on a human assumption: that discovery begins with a search, passes through a website, and ends in conversion. Agentic AI doesn't use doors. It uses structured pathways, machine-readable attributes, and decision logic that operates entirely outside your owned channel. The routing is already happening. The question is whether health systems are in the decision set, or invisible to it. The infrastructure making this possible isn't speculative. Model Context Protocol (MCP), now an open standard backed by Anthropic, OpenAI, and Google DeepMind, defines how AI agents connect to external tools and data sources. NLWeb, launched by Microsoft in May 2025, turns websites into machine-queryable endpoints. Together, they create an execution layer on top of your digital ecosystem. And most hospital websites aren't built to be legible to it. Chris Boyer and Reed Smith work through what this shift actually requires:
- Why the patient journey now runs conversation → AI interpretation → machine routing → conversion — and health systems control only the last step
- What breaks when machines encounter unstructured provider bios, inconsistent service line naming, and scheduling availability gaps
- Why brand strength built on emotional resonance doesn't translate to machine-readable signals — and what does
- The gap between "78% of health systems engaged in AI projects" and the 52% that feel operationally ready to implement them
- What a practical machine readiness audit looks like, and who inside the organization should own it
The organizational problem is as hard as the technical one. Marketing owns content but rarely owns schema.
IT owns infrastructure but rarely thinks in terms of machine-readable patient experience. Someone has to own machine readiness as a cross-functional problem. Right now, almost no one does. If your digital strategy was designed for the patient who searches, clicks, and reads, it was not designed for the agent that queries, evaluates, and routes. Mentions From the Show: Dean Browell on LinkedIn | Danny Fell on LinkedIn | Reed Smith on LinkedIn | Chris Boyer on LinkedIn | Chris Boyer website | Chris Boyer on BlueSky | Reed Smith on BlueSky
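As a concrete illustration of the "machine-readable attributes" the episode describes (this sketch is ours, not from the show), here is a minimal Python example of publishing a provider profile as schema.org JSON-LD, the kind of structured record an AI agent can query and evaluate instead of parsing a free-text bio. All names, fields, and URLs are hypothetical; `isAcceptingNewPatients` in particular is an assumed extension property, not a standard schema.org term.

```python
import json

def provider_jsonld(name, specialty, accepting_new_patients, booking_url):
    """Return a hypothetical schema.org Physician record as a JSON-LD string."""
    record = {
        "@context": "https://schema.org",
        "@type": "Physician",
        "name": name,
        "medicalSpecialty": specialty,
        # Assumed extension property, not part of the schema.org core vocabulary:
        "isAcceptingNewPatients": accepting_new_patients,
        # A machine-actionable scheduling entry point an agent could route to:
        "potentialAction": {
            "@type": "ScheduleAction",
            "target": booking_url,
        },
    }
    return json.dumps(record, indent=2)

# Example: embed this output in a <script type="application/ld+json"> tag.
markup = provider_jsonld(
    "Dr. Jane Doe", "Cardiovascular", True, "https://example.org/book/jane-doe"
)
```

The point is not this exact vocabulary but the shift it represents: consistent, typed attributes (specialty, availability, a scheduling target) that survive machine interpretation, where an unstructured bio page does not.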
With Stanislav Fort on the rise of AI agents, the limits and risks of artificial intelligence, protecting and repairing software cathedrals, and building the cybersecurity startup Aisle in Prague. Hosted by Štěpán Sedláček. We are probably living through a technological revolution whose speed, scale, and potential impact on human life and work are unprecedented, however it ends. The rise of large language models and generative AI is ever more visible across spheres of human activity, from programming to art. Questions once debated by a relatively small group of people involved in AI research and development, or in science fiction, now sit at the center of public debate, though they arguably deserve even more attention, including from governments. The question of whether an artificial intelligence that surpasses humans will ever exist has largely given way to the question of whether it is a year away, a few years, or longer. Stanislav Fort is a mathematician, physicist, and expert on artificial intelligence and large language models (LLMs) who previously worked at leading companies in the field, including Google DeepMind and Anthropic. How does he see this year in AI? "I think this is the year most people will realize that AI works and can do useful intellectual labor. In 2025, reasoning models went mainstream, especially with the arrival of the R1 model from DeepSeek. Since then, models have improved enormously and become capable of solving long, difficult intellectual tasks across fields that require coordinating thought over long time horizons. And those horizons have been lengthening rapidly, month after month. Today, most people in programming, software engineering, and industries that depend heavily on computers realize we are at the point where these systems can work on the same kinds of tasks as elite professionals, with little supervision.
2026 will be the year that AI agents, and the reasoning models that power them, start operating in real, economically important work," says Stanislav Fort, who co-founded the company Aisle with Ondřej Vlček and Jaya Baloo and serves as its chief scientist. They have built an autonomous AI tool that can quickly find and fix security vulnerabilities in complex software systems such as OpenSSL, which encrypts most communication on the web. What are their goals after a year in the cybersecurity field? What fundamental problem have they managed to solve? What does he make of the rise of AI agents and the activity around the Moltbook network? Does he see any fundamental limits to the development of artificial intelligence? What does he think of the AI bubble in the markets? How should Europe approach the current race in AI development? And what are the pitfalls of founding a cybersecurity startup in Prague? Štěpán Sedláček asks all this and more on the Zeitgeist podcast.
✅ Two major model releases from Google and Anthropic ✅ The usual AI drama ✅ Surprising AI updates no one saw coming ✅ AI leaks and reports that, if true, could change how we work
Yeah, there was a lot to follow this week in AI. If you missed anything, we've got you covered. Google Gemini 3.1 tops charts, Claude Sonnet 4.6 impresses, new OpenAI leaks reveal their massive AI hardware plans, and more -- An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Anthropic Revenue Growth vs OpenAI Projections
OpenAI's 2030 Hardware and Revenue Plans
OpenAI and Anthropic Beef at India Summit
AI Global Summit: New Delhi Declaration Overview
Google Gemini 3.1 Pro Three-Tier Reasoning System
Gemini 3.1 Pro Benchmark and Performance Scores
Claude Sonnet 4.6 Release and Benchmark Results
Anthropic Model Tier Comparisons: Haiku, Sonnet, Opus
Google Pameli Photoshoot AI for Product Images
AI Job Automation Concerns: Andrew Yang Analysis
OpenAI Consumer Hardware: Speaker, Glasses, Light
Weekly AI Model Updates and Feature Rollouts
Timestamps:
00:00 "Anthropic vs OpenAI Revenue Race"
04:00 Anthropic vs OpenAI Revenue Battle
07:39 Anthropic's API Usage Decline
11:03 AI Summit Sparks Debate and Criticism
16:37 "Gemini 3.1 Pro Dominates Benchmarks"
18:23 "Google's Edge in AI Race"
20:56 "Sonnet 4.6 Outperforms Opus"
24:13 "Google's AI Photoshoot Tool"
29:57 "AI's Impact on Jobs"
31:13 AI Dominance & OpenAI Hardware
35:03 AI Revenue Risks and Competition
41:10 "Subscribe for AI Updates"
42:08 "Subscribe to Everyday AI Updates"
Keywords: Gemini 3.1, Google DeepMind, AI news, Large Language Model, OpenAI, Anthropic, Claude Sonnet 4.6, Claude Opus 4.6, ChatGPT, Sam Altman, Dario Amodei, Global AI Summit, AI Impact Summit India, AI-powered hardware, Smart speaker, Smart glasses, AI chip spending, Compute infrastructure, Revenue growth
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Start Here ▶️ Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and access all episodes there: StartHereSeries.com
This episode explores the vision of Demis Hassabis, CEO of Google DeepMind and recipient of the 2024 Nobel Prize in Chemistry. Hassabis argues that 2026 marks a pivotal turning point in human history, as we enter what he describes as an "AI Renaissance"—an era whose impact could be ten times greater than the Industrial Revolution, unfolding at ten times the speed. He predicts that Artificial General Intelligence (AGI) could be achieved before 2030, while cautioning that today's AI systems remain in a state of "jagged intelligence," still lacking robust reasoning and long-term planning capabilities. As the industry enters a phase of consolidation, Hassabis is focused on transforming AI into a scientific engine. Through breakthroughs such as AlphaFold and initiatives like Isomorphic Labs, he aims to reshape drug discovery, while collaborations with the U.S. Department of Energy—such as the "Genesis Project"—seek to accelerate progress in energy innovation. At the core of his vision is the concept of "Radical Abundance." As AI drives the marginal cost of healthcare and energy toward near zero, society may begin to transition into a post-scarcity era. To navigate this shift, Hassabis proposes new social mechanisms, including a "Global Abundance Dividend," and emphasizes that AI governance must extend beyond technologists, requiring international cooperation to ensure these technologies benefit all of humanity.
【Subscribe】New episodes every morning at 5:30.
【Original article】Title: Google DeepMind unleashes new AI to investigate DNA's 'dark matter'. DeepMind's AlphaGenome AI model could help solve the problem of predicting how variations in noncoding DNA shape gene expression.
Text: DNA is the blueprint for life, influencing our health. We know that our genes, the genetic "words" that encode proteins, play a major role in health and disease. But more than 98 percent of our genome consists of DNA that doesn't build proteins. Once disregarded as "junk DNA," scientists now know that this molecular dark matter is crucial for determining gene activity in ways that keep us healthy—or cause disease.
Vocabulary: encode v. /ɪnˈkoʊd/ to contain the instructions to produce a protein or function. Examples: "A single gene can encode multiple proteins through alternative splicing." "Only about 2% of the human genome actually encodes proteins."
For the full original article and detailed study notes, follow the WeChat official account 「早安英文」 and reply "外刊". More useful English-learning material awaits you!
【About the show】"早安英文 – Daily Close Reading of the Foreign Press" walks you through the latest articles from the international press and the hottest global stories: grammar analysis, breakdowns of long and difficult sentences, down-to-earth translations, and explanations of key vocabulary. All selections come from leading international outlets such as The Economist, The New York Times, The Wall Street Journal, The Washington Post, The Atlantic, Science, and National Geographic.
【Who it's for】1. English learners who follow current events and want to learn the latest, most current English expressions; 2. Anyone looking to improve their listening, speaking, reading, and writing through authentic English; 3. English enthusiasts with plans to study or travel abroad who want to master expressions quickly; 4. Candidates preparing for English exams (such as CET-4/6, TOEFL, IELTS, or postgraduate entrance exams)
【What you'll get】1. More than 1,000 close-reading lessons on foreign press articles, enriching your language and cultural background; 2. Word-by-word, sentence-by-sentence explanations to systematically master English vocabulary, listening, reading, and grammar; 3. Study notes with each episode, including full-text annotations, analysis of long and difficult sentences, and tricky grammar points, to help clear away reading obstacles.
Crude prices move higher with Brent now surpassing $70 a barrel after President Trump warns of potential consequences should Iran fail to reach a deal over its nuclear programme. U.S.-based private credit group Blue Owl announces it will halt investor withdrawals from a debt fund for retail traders, causing shares to slump across the sector. We are live at the A.I. Impact summit in New Delhi where CNBC learns that Nvidia is launching a new $30bn investment into OpenAI. Google DeepMind co-founder and CEO Demis Hassabis says the sector is suffering from a shortfall of memory and chips. And in aviation news, Airbus cuts its output target causing shares to fall but AF-KLM posts more than €2bn in FY profit.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
00:00 Introduction
00:15 Google launches Lyria 3: you'll be able to make songs just by uploading a photo. The new tool developed by Google DeepMind can generate images and audio from a single prompt.
01:30 Regulate or let them create? Netflix warns of the risks of imposing content quotas. Greg Peters, the company's co-CEO, visited the country to talk about the industry's future, the trends he is watching, and the importance of the Mexican market.
03:14 The IMSS buys time against brain damage, thanks to AI. This tool, together with imaging equipment, helps treat strokes faster, saving lives and reducing lasting harm.
Gemini creates songs with Lyria 3 while Sony develops technology to measure influences and future royalties in AI-generated music. By Félix Riaño @LocutorCo
Gemini now generates songs with AI, and Sony is preparing tools to trace which human music influenced them.
Google has just activated a new feature in Google Gemini. Now you can type a phrase and get a 30-second song. Rap, pop, rock, afrobeat. With lyrics or instrumental. The model behind it is called Lyria 3 and comes from the Google DeepMind team. It has been available since February 18, 2026, for users over 18, in eight languages including Spanish.
But while Google invites people to play with music, another company is working on something very different. Sony Group is developing technology that can estimate how much human songs influence a track created by AI.
One creates. The other measures. Are we entering the era of the digital music accountant?
Creating is now very easy. Tracing is complex.
First, what's new from Google. In the Gemini app, you go to "Tools" and choose "Create music." You type something like: "Latin pop, upbeat rhythm, acoustic guitar, soft female vocals, lyrics about a pet." In seconds you have a 30-second file ready to download as an MP3 or a video. If you upload a photo or a clip, Gemini analyzes the image and composes something that matches its atmosphere. The cover art is generated by the Nano Banana model. Everything is ready to share.
Google is clear that the goal is not to create the next global hit. It's quick expression. A joke. A soundtrack for a Short.
The songs carry an invisible mark called SynthID, a digital watermark identifying the audio as generated by Google's AI. You can also upload a file to Gemini and ask whether it was created with its system.
That's the creative side.
Now comes the accounting side.
The music industry has spent years asking the same question: what data are these models trained on? Platforms like Suno and Udio have faced lawsuits over the alleged unlicensed use of protected recordings. The debate comes down to a simple question: if an AI learns by listening to human music, who should get paid?
That's where Sony comes in. The Sony AI team published research on a system that can estimate which songs influence an AI-generated track. In academic tests, the system can calculate approximate percentages of influence. If the developer cooperates, the model's internal data is analyzed. If not, the generated songs are compared against large music catalogs.
The technical challenge is enormous. In tests with relatively small datasets, attributing influences can take hours on very powerful hardware. Scaling that to industrial levels remains an open problem. But the intent is clear: to build a foundation for distributing royalties when AI takes part in musical creation.
This is not a direct fight between Google and Sony. We are watching two moves on the same board. Google is betting on integrating text, image, video, and now sound into a single creative ecosystem. With millions of active Gemini users, music generation becomes an everyday act. Sony, which controls enormous historical catalogs, is exploring how to turn attribution into a tool for negotiation and economic distribution.
If AI-generated music grows on streaming platforms, it will be necessary to decide how revenue is shared. The big question is this: will every AI-generated song come with an influence calculation attached? The technology is already in development. What's missing is an industry agreement on how to use it.
Lyria 3 is available in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese.
Google plans to expand the language list and improve quality. Free users can create songs with daily limits; paid plans allow more generation. Google is also integrating Lyria 3 with YouTube Dream Track to create personalized music for Shorts, which could directly affect the mass production of short-form content.
Meanwhile, companies and record labels are developing detection tools based on statistical analysis and "machine unlearning" techniques. These systems try to identify which parts of a model's knowledge influence a specific output.
The debate is not whether AI can make music. It already can. The debate is how to measure its debt to human music. Gemini now creates songs in seconds. Sony is working on measuring which human music influenced them. We are looking at instant creativity and algorithmic accounting at the same time. Are you more excited to create, or to measure?
Listen to more episodes on Spotify: Flash Diario
Bibliography: Numerama, Frandroid, BFMTV, Google Blog, Complete Music Update, Electronics Weekly
Become a supporter of this podcast: https://www.spreaker.com/podcast/flash-diario-de-el-siglo-21-es-hoy--5835407/support. Support Flash Diario and listen ad-free in the Supporters Club.
Lila Ibrahim is the COO of Google DeepMind. James Manyika is the senior Vice President for Research, Technology, and Society at Google. The two join Big Technology Podcast to discuss how Google's AI effort operates and runs experiments. In this conversation, we discuss the fundamental operating structure of DeepMind, how Google proper has become more experimental with the revival of Labs and other programs, and how the company is thinking about AI and education. We also cover weather and flood prediction at global scale, and training AI in space. Hit play for a deep inside look at the mechanics behind Google's AI research machine and the big ideas it's betting on next. Take back your personal data with Incogni! Go to incogni.com/bigtechpod and use code bigtechpod at checkout, our code will get you 60% off on annual plans. Go check it out!
At Davos this year, some of the biggest names in tech sent a clear signal. AI is no longer a novelty. It is no longer a proof-of-concept exercise. As Demis Hassabis of Google DeepMind suggested, AI will shape more meaningful work. And Satya Nadella of Microsoft was even more direct: AI only matters if it improves real outcomes for people. So what does that look like inside the enterprise? In this episode of Tech Talks Daily, I'm joined by Andrew Boyagi, Customer CTO at Atlassian, to unpack how the conversation has shifted from experimentation to execution. Developers, in many ways, are the perfect lens for understanding this moment. Over the last two decades, their role has expanded far beyond writing code. They now own products, infrastructure, operations, and business outcomes. AI is simply the next chapter in that evolution. Andrew argues that AI will not replace engineers. It will raise expectations. As intelligent tools absorb repetitive work, the real value moves up the stack: system design, architectural thinking, reviewing and refining AI-generated output, and orchestrating solutions that solve genuine business problems. And through it all, humans remain firmly in the loop. We also explore what this means for leadership, why mindset is starting to matter more than technical skill alone, how organizations can avoid layering AI on top of broken processes, and why the companies pulling ahead are treating AI as a strategic discipline, not a feature upgrade. This is a conversation grounded in reality. It speaks to product leaders, CTOs, CIOs, and anyone asking a simple but powerful question: if we are investing in AI, what are we actually getting back? And before we close, we look ahead to Team '26 and the themes Andrew and his team are already working on. If this year has been about proving value, what will the next chapter demand from enterprise leaders? As always, I'd love to hear your thoughts.
Are you seeing proof of value in your organization yet, or are you still working through the pilot phase?
In today's Tech3 from Moneycontrol, we bring you a quick wrap from the India AI Impact Summit in Delhi. India's much-anticipated AI models are unveiled by Sarvam AI, Gnani.ai and the BharatGen consortium. Wikipedia co-founder Jimmy Wales speaks on AI and neutrality, while AI pioneer Yoshua Bengio warns about risk management and job displacement. We also track Google DeepMind's new partnership with Indian institutions and Demis Hassabis on the road to artificial general intelligence.
Dex Hunter-Torricke has worked with some of the most influential people in tech over the last 15 years. But now he's sounding the alarm. In this episode of Jobs of the Future, we sit down with a true Silicon Valley insider who has spent the last 15 years at the epicentre of the tech revolution. From serving as the first executive speechwriter for Eric Schmidt at Google to leading communications for Mark Zuckerberg at Facebook and Elon Musk at SpaceX, our guest has had a front-row seat to the decisions shaping our modern world. Most recently, he served as a senior leader at Google DeepMind, the world's premier AI lab, during the most pivotal moments in the race toward Artificial General Intelligence (AGI).
03:36 - His Tech Industry Journey
06:30 - Being at The Front Lines of AGI
07:05 - The Reality Check
09:09 - Why AI is So Different to Every Other Technology
11:05 - The AGI Countdown
12:14 - The Death of the "Good Life"
13:41 - The Geopolitics of Sovereignty
14:46 - Future-Proofing Your Career
18:39 - The Economy of Meaning
21:29 - The 60% Job Vulnerability
25:23 - The Brittle Power of Tech Giants
32:15 - Launching the Center for Tomorrow
52:30 - Redefining Success
57:00 - A Philosophy for Interdependence
**********
Follow us on socials!
Instagram: https://www.instagram.com/jimmysjobs
Tiktok: https://www.tiktok.com/@jimmysjobsofthefuture
Twitter / X: https://www.twitter.com/JimmyM
Linkedin: https://www.linkedin.com/in/jimmy-mcloughlin-obe/
Want to come on the show? hello@jobsofthefuture.co
Sponsor the show or Partner with us: sunny@jobsofthefuture.co
Credits: Host / Exec Producer: Jimmy McLoughlin OBE; Producer: Sunny Winter https://www.linkedin.com/in/sunnywinter/; Junior Producer: Thuy Dong; Edited by: Ben Alexander Kippen
In this mind-bending episode of the Qubit Value Podcast, we fast-forward to February 2026 to unpack the release of Google DeepMind's Gemini 3 Deep Think—a revolutionary AI model that essentially pauses to reason before it responds. We explore how this "System 2" thinking capability is finally bridging the gap between classical logic and quantum hardware, transforming everything from complex circuit transpilation to the growth of near-perfect semiconductor crystals. Join us as we discuss the emergence of the "Unified Stack," where AI architects and quantum mechanics merge to move the industry from fragile experiments to repeatable, fault-tolerant engineering, effectively proving that the "inference time" tax is a small price to pay for solving humanity's hardest physics problems. Want to hear more? Send a message to Qubit Value
Artificial general intelligence (AGI) is that point in the future when the machines can do pretty much everything better than humans. When will it happen, what will it look like, and what will be the impact on humanity? Two of the brightest minds working in AI today, Demis Hassabis, Co-Founder and CEO of Google DeepMind, and Dario Amodei, Co-Founder and CEO of Anthropic, speak to Zanny Minton Beddoes, Editor-in-Chief of The Economist. Benjamin Larsen, an expert in AI at the World Economic Forum, introduces the conversation and gives us a primer on AGI. You can watch the conversation from the Annual Meeting 2026 in Davos here: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/the-day-after-agi/ Links: Centre for AI Excellence: https://centres.weforum.org/centre-for-ai-excellence/home AI Global Alliance: https://initiatives.weforum.org/ai-global-alliance/home Global Future Council on Artificial General Intelligence: https://initiatives.weforum.org/global-future-council-on-artificial-general-intelligence/home Related podcasts: Check out all our podcasts on wef.ch/podcasts: YouTube: - https://www.youtube.com/@wef/podcasts Radio Davos - subscribe: https://pod.link/1504682164 Meet the Leader - subscribe: https://pod.link/1534915560 Agenda Dialogues - subscribe: https://pod.link/1574956552
Send a text
We are in the 'Intelligence Age' and the 'humans versus AI' debates are everywhere. So, we thought we'd bring in an AI futurist and a leading voice on AI, digital transformation, and how AI will shape business, education, and society. Meet Steve Brown, entrepreneur and former Google DeepMind and Intel executive who has helped brands like Bank of America, Lenovo, Nespresso, Cameco, and Intuit prepare for what he calls The Intelligence Age. Drawing upon his decades of experience in artificial intelligence and high tech to help leaders build winning AI strategies that fuel innovation, boost performance, and drive growth, Steve succinctly explains the radical global transformation underway with AI in his book 'The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation'. Hit play for Steve's take on the AI ultimatum and key takeaways from his book.
[2:46s] Genesis of Steve as an AI futurist
[10:09s] On 'The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation'
[22:08s] On AI innovation, ethical AI, regulations
[36:16s] Top 3 future trends in AI
RWL: Steve's book 'The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation'
Connect with Steve on LinkedIn
Connect with Vinay on X and LinkedIn
What did you think about this episode? What would you like to hear more about? Or simply, write in and say hello! podcast@c2cod.com
Subscribe to us on your favorite platforms – Google Podcasts, Apple Podcasts, Spotify, Overcast, Amazon Music, Pandora, TuneIn + Alexa, Stitcher, Jio Saavn and more.
This podcast is sponsored by C2C-OD, your Organizational Development consulting partner 'Bringing People and Strategy Together'. Follow @c2cod on Twitter, LinkedIn, Instagram, Facebook
After a volatile few months across games, tech, and public markets, it's time for a grounded check-in on where the industry actually stands. Host Devin Becker is joined by Aaron Bush (Managing Partner & Co-Founder, Naavik) to unpack the latest signals – from AAA publisher performance and what recent EA earnings suggest for big franchises like Battlefield, to Ubisoft's ongoing restructuring, studio closures, and the push to reframe its future through initiatives like Vantage Studios.
Next, they dig into Roblox's continued growth and what its recent results imply, even as age-related scrutiny and safety conversations remain part of the narrative.
From there, the discussion widens to the state of the console market: the early momentum around Switch 2 sales, the trajectory of Xbox hardware, and why Sony appears to be holding its ground.
Devin and Aaron also look at how transmedia is shaping perception and demand, including Nintendo's recent moves and what releases like an upcoming Mario Galaxy movie – and the surprise success of Iron Lung this month – reveal about IP leverage, audience crossover, and timing.
They close by addressing the market whiplash around the reveal of Google DeepMind's Genie 3, and a "buy, sell, or hold" round covering Microsoft, Krafton, AAA vs. AA, and PC gaming to highlight where near-term opportunities and risks may be emerging.
We'd like to thank Heroic Labs for making this episode possible! Thousands of studios have trusted Heroic Labs to help them focus on their games and not worry about gametech or scaling for success. To learn more and reach out, visit https://heroiclabs.com/?utm_source=Naavik&utm_medium=CPC&utm_campaign=Podcast
If you like the episode, please help others find us by leaving a 5-star rating or review! And if you have any comments, requests, or feedback shoot us a note at podcast@naavik.co.
Watch the episode: YouTube ChannelFor more episodes and details: Podcast WebsiteFree newsletter: Naavik DigestFollow us: Twitter | LinkedIn | WebsiteSound design by Gavin Mc Cabe.
WindBorne Systems is transforming global weather forecasting by deploying long-duration weather balloons that fly for weeks instead of hours. What began as a Stanford Student Space Initiative project has scaled to 100 balloons aloft simultaneously, targeting 500 by end of next year, with an end goal of 10,000 balloons monitoring Earth's atmosphere. In this episode of BUILDERS, I sat down with John Dean, Co-Founder and CEO of WindBorne Systems, to explore how the company secured its first government contract in under three years without lobbyists, achieved 4x annual manufacturing growth, and built Weather Mesh—an AI weather model that outperforms competitors from Google DeepMind.
Topics Discussed:
- The technical evolution from Stanford project to operational constellation of altitude-controlled balloons
- Strategic decision to pursue government revenue before building B2B forecasting products
- Navigating Defense Innovation Unit and Air Force Lifecycle Management Center procurement as a founder
- Timeline from founding to first grants (within six months) and first data delivery contract (two and a half years)
- Current roughly 50/50 revenue split between civilian agencies (NOAA, international weather services) and Department of Defense
- Building Weather Mesh after Huawei's Pangu Weather validated end-to-end AI forecasting viability
- Transitioning from founder-led sales by promoting a Palantir hire from proposal writer to public sector growth leader
- The 30-year vision of millions of fingernail-sized atmospheric sensors creating a planetary nervous system
GTM Lessons For B2B Founders:
Study the bureaucracy's incentive structures before pitching product value: John spent years mapping how government procurement actually works rather than leading with product capabilities. The critical insight: in DoD sales, the warfighter (end user) doesn't control purchasing decisions.
Success requires understanding each stakeholder's specific mandate and aligning your solution to their organizational incentives, not just operational needs. For civilian agencies like NOAA, the dynamics differ entirely. Founders entering govtech should invest 6-12 months learning procurement mechanics before expecting revenue.

Use government contracts as non-dilutive scaling capital for hardware businesses: WindBorne secured SBIR grants within six months, then landed their first Air Force data delivery contract through Defense Innovation Unit at the two-and-a-half-year mark. John explicitly treated early grants as equivalent to venture funding but without equity dilution. For companies building physical infrastructure at scale (satellites, hardware networks, manufacturing operations), government contracts provide the runway to reach technical milestones that unlock larger B2B opportunities. This sequencing—government funding first, then B2B products built on that foundation—proves more capital-efficient than attempting to raise massive venture rounds upfront for unproven hardware.

Integrate with legacy systems rather than attempting wholesale replacement: WindBorne doesn't aim to replace the 1,000 radiosondes launched daily worldwide—they're expanding coverage from the current 15% of Earth (where humans can launch traditional balloons) to 100%. The hardware is revolutionary (weeks of flight versus two hours), but the go-to-market integrates into existing weather agency workflows and feeds into established models like GFS and ECMWF. This approach accelerated adoption because agencies could add WindBorne data without overhauling their entire forecasting infrastructure. The displacement of radiosondes becomes economically inevitable long-term, but only after proving the system at scale.
Move fast once adjacent technology validates your thesis: WindBorne wasn't investing in AI-based weather forecasting until Huawei's Pangu Weather paper demonstrated that end-to-end neural weather models could compete with physics-based simulations. Once that validation appeared, John's team moved immediately—adopting the open architecture and expanding it into Weather Mesh before the approach became widely adopted. The lesson isn't to wait for competitors, but to monitor adjacent technological developments and move decisively when validation emerges. They built a top-performing model by being early to a proven approach, not first to an unproven one.

Hire for mid-level roles and promote based on demonstrated judgment: John hired Dana from Palantir as a proposal writer, not as a sales executive. He watched her demonstrate strong opinions that consistently proved correct, then promoted her to build and lead the entire public sector growth organization. This internal promotion model worked better than external executive hires because the person already understood WindBorne's technology, customers, and internal culture. For specialized domains like government sales, bringing in experienced operators at individual contributor levels and promoting them as they prove their judgment builds more effective organizations than hiring executives to parachute in.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role.
Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
From industry hype to hard reality checks, this week we discuss Shooter Monthly and the latest on High Guard, analyze Aries Interactive's funding, and explore the monumental impact of Google DeepMind's Genie 3 on game development, including industry resistance to new AI tools. We also tackle the decline of physical game stores, address a potentially distorted view of the UK market, and provide a deep dive into Arc Night Enfield's market performance and the future of 3D RPGs, before pinpointing High Guard's marketing missteps and wrapping up with concluding thoughts.

02:13 | Shills
03:38 | Shooter Monthly and High Guard Discussion
06:33 | Aries Interactive Funding and Analysis
16:27 | Google DeepMind's Genie 3 Impact
30:58 | AI Tools and Industry Resistance
32:45 | The Decline of Physical Game Stores
34:44 | A Distorted View of the UK
36:40 | Arc Night Enfield: A Deep Dive
40:10 | Enfield's Market Performance
40:59 | The Future of 3D RPGs
51:11 | High Guard's Marketing Missteps
01:03:41 | Concluding Thoughts and Farewell
Kevin Green kicks off Thursday's market coverage with his eyes on the weakness in tech and comm. services on the heels of Alphabet (GOOGL) earnings. He says the ripple effects could impact other names, adding $170 "has to hold" for Nvidia (NVDA). For GOOGL, he says the re-rated "aggressively higher" capex spend was up sharply from market expectations, as Google DeepMind and its AI capabilities continue to spend heavily. KG also examines Qualcomm's (QCOM) downward post-earnings move and Bitcoin's (/BTC) continued fall so far this month. For the S&P 500 (SPX), KG says "keep your head on a swivel" while he projects a wide range today with 6750 to the downside and 6930 to the upside.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV app - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – / schwabnetwork
Follow us on Facebook – / schwabnetwork
Follow us on LinkedIn - / schwab-network
About Schwab Network - https://schwabnetwork.com/about
In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, a Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.

Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.

Key themes we explore:
- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges
- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition
- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI
- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether
- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities
- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential

Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.

Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."
Today on Silicon Carne, we look back at the 2026 World Economic Forum in Davos, where the statements from Big Tech bosses were explosive. So what did they tell us about the future of AI, employment, and our civilization?
Steve Brown has spent years helping organizations see around corners. As a former executive at both Intel Labs and Google DeepMind, where he served as their in-house futurist, Steve brings a unique perspective on what happens when rapid technological change collides with practical business reality. In this conversation, he challenges leaders to move beyond fear and cost-cutting mentality to embrace AI as a tool for genuine value creation. Steve explains that being a futurist isn't about making predictions—that's for fortune tellers. Instead, it's a discipline of examining trends, understanding how they intersect over time, and mapping possible futures. But the landscape has grown increasingly complex. The pace of AI development has accelerated so dramatically that projecting even six months ahead has become challenging. What makes AI particularly difficult to forecast isn't just the technology itself, but the ripple effects of having powerful intelligence available on demand at low cost. As Steve puts it, this changes everything about everything. When it comes to implementation, Steve grounds his approach in a framework he calls "possibility and purpose." He sees AI creating an enormous landscape of what's possible, but warns that the real leadership challenge is figuring out what not to do. By finding the intersection between corporate purpose and this expanded possibility space, organizations can focus their efforts where they'll create the most value. Steve offers a fresh perspective on AI's relationship with human qualities, such as empathy. While acknowledging that AI simulates rather than truly experiences emotions, he points to promising applications like AI therapists that can reach people who would never seek human help. The key is understanding when simulation serves a genuine need versus when it creates friction in developing essential human skills—like learning to navigate relationships and failures. 
The heart of Steve's message centers on reimagining AI not as a replacement for humans, but as a collaborative teammate. He describes three types of AI agents organizations should consider: offload agents that handle boring repetitive work, elevate agents that amplify human capabilities, and extend agents that enable people to do things they couldn't do before. This framework transforms workforce planning from a zero-sum game into an expansion strategy. Steve points to Jensen Huang's vision at NVIDIA—growing from 30,000 employees to 50,000, supported by 100 million AI assistants—as an example of thinking about amplification rather than reduction. Steve argues that AI project failures typically stem from three core issues: immature technology, poor change management, and messy data. Organizations succeed when they start small with bounded projects, balance short-term wins with medium and long-term initiatives, and treat AI implementation as fundamentally a change management challenge rather than just a technology deployment. He emphasizes that everyone owns the AI transition—from line of business to HR to IT—though having a Chief AI Officer can help drive the organizational transformation required. Rather than obsessing over traditional ROI calculations, Steve encourages leaders to focus on the human challenges that AI can solve. When the average knowledge worker spends 32 days per year just searching for information, cutting that time in half represents massive value that goes beyond simple efficiency metrics. 
Learn more about Steve's work and access his resources:
AI Resources: https://beacons.ai/aifuturist
AI Course: https://www.stevebrown.ai/ai-course
AI Workshops: https://www.stevebrown.ai/workshop
Keynotes: https://www.stevebrown.ai/keynotes
YouTube: www.youtube.com/@futureofai
Book on Amazon: "The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation." https://a.co/d/1YoFV5C
Connect with him on LinkedIn: https://www.linkedin.com/in/futuresteve/
A short improvised VLOG episode from the car with @OMBREmp4 and @ACExperience: our first games, our favorite consoles, the industry today... a conversation to savor like a little piece of candy!
Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch). Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/ai-sota-2026-transcript CONTACT LEX: Feedback – give feedback to Lex: https://lexfridman.com/survey AMA – submit questions, videos or call-in: https://lexfridman.com/ama Hiring – join our team: https://lexfridman.com/hiring Other – other ways to get in touch: https://lexfridman.com/contact SPONSORS: To support this podcast, check out our sponsors & get discounts: Box: Intelligent content management platform. Go to https://box.com/ai Quo: Phone system (calls, texts, contacts) for businesses. Go to https://quo.com/lex UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex Fin: AI agent for customer service. Go to https://fin.ai/lex Shopify: Sell stuff online. Go to https://shopify.com/lex CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex Perplexity: AI-powered answer engine. Go to https://perplexity.ai/ OUTLINE: (00:00) – Introduction (01:39) – Sponsors, Comments, and Reflections (16:29) – China vs US: Who wins the AI race? (25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning? (36:11) – Best AI for coding (43:02) – Open Source vs Closed Source LLMs (54:41) – Transformers: Evolution of LLMs since 2019 (1:02:38) – AI Scaling Laws: Are they dead or still holding? 
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training (1:51:51) – Post-training explained: Exciting new research directions in LLMs (2:12:43) – Advice for beginners on how to get into AI development & research (2:35:36) – Work culture in AI (72+ hour weeks) (2:39:22) – Silicon Valley bubble (2:43:19) – Text diffusion models and other new research directions (2:49:01) – Tool use (2:53:17) – Continual learning (2:58:39) – Long context (3:04:54) – Robotics (3:14:04) – Timeline to AGI (3:21:20) – Will AI replace programmers? (3:39:51) – Is the dream of AGI dying? (3:46:40) – How AI will make money? (3:51:02) – Big acquisitions in 2026 (3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta (4:08:08) – Manhattan Project for AI (4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters (4:22:48) – Future of human civilization
Google DeepMind dropped Project Genie and you can now walk around AI-generated 3D worlds. DeepMind CEO Demis Hassabis says this is the path to the holodeck. He's not wrong. Meanwhile Clawd, aka ClawdBot and now MoltBot, is giving people AI superpowers… spawning agents, teaching itself skills, connecting to everything in your life. It's also a massive security risk and people are spending thousands on API calls. You *probably* shouldn't use it. Plus… Grok video is suddenly really good, KREA real-time generation, a robot that hip-checks dishwasher drawers, Anthropic CEO Dario Amodei's sobering new essay, and the dead internet theory is no longer a theory. THE ROBOTS ARE DOING CHORES NOW. WE'RE SO BACK.

#ai #ainews #projectgenie

Come to our Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
Google Project Genie: Playable Worlds (formerly Genie 3) https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
Josh Woodward (VP of AI Studio and more @ Google) low poly 3D cowboy https://x.com/joshwoodward/status/2016921839038255210?s=20
Theoretically Media: Realistic Hollywood Blvd & New York https://x.com/TheoMediaAI/status/2016919987428991107?s=20
Ethan Mollick: Otter Pilot in an Airport https://x.com/emollick/status/2016919989865840906?s=20
ClawdBot (now MoltBot) Insanity https://clawd.bot/
Creator Peter Steinberger on TBPN https://x.com/tbpn/status/2016306566077755714?s=20
Good long post on the pros/cons & safety concerns https://x.com/Andrey__HQ/status/2016228427901370760?s=20
White hat hacker shows exactly how bad it can get https://x.com/theonejvo/status/2016510190464675980?s=20
MoltBook - Social Network For Bots, By Bots https://x.com/MattPRD/status/2016560277333168540?s=20
Kimi K2.5 launches https://www.kimi.com/
Dario Amodei's essay on potential AI risks https://www.darioamodei.com/essay/the-adolescence-of-technology
Lucy DecartAI demo https://lucy.decart.ai/
KREA Real-time https://www.krea.ai/realtime
New Grok Imagine model https://x.com/xai/status/2016745652739363129
Figure introduces Helix 2 (the AI model running the robot) https://x.com/Figure_robot/status/2016207013236375661?s=20
Pokemon-style interactive quiz game for podcast content https://x.com/lennysan/status/2016584174421897590?s=20
Small builder alert: racing app for data https://x.com/ShinjaeJung/status/2015980232667435048?s=20
Blackfiles: AI-generated visual YouTube channel, long form https://youtu.be/2n01_rt2vKg?si=QvP3vYgGV6_Y7pua
Mureka V8 AI music model https://x.com/EHuanglu/status/2016668882644156863?s=20 https://x.com/Mureka_AI/status/2016544920283365831
Isometric NYC https://x.com/_coenen/status/2014359718697799989 https://cannoneyed.com/isometric-nyc/
This week on DisrupTV, we go beyond the AI hype and into the decisions shaping 2026. Peter Danenberg, Senior Software Engineer at Google DeepMind, joins us to unpack the rapid evolution of multimodal models like Gemini, the race toward AGI, and what he's seeing from inside one of the world's most influential AI labs. We're also joined by David Bray, PhD, Distinguished Chair of the Accelerator at The Stimson Center & Principal/CEO, LDA Ventures Inc., to examine the dangerous assumptions executives are making post-Davos — from AI-driven workforce disruption and machine-speed cyber threats to hardware security risks and why geopolitics can no longer be ignored by business leaders. From meetups and models to geopolitics and governance, this episode is a must-watch for leaders navigating AI at scale.
The U.K.'s Keir Starmer meets with China's Xi Jinping, the EU designates Iran's IRGC as a terrorist organization, the IDF reportedly accepts the Gaza Health Ministry's reported death toll of 71,000, two CBP agents are placed on leave after the fatal shooting of Alex Pretti in Minneapolis, the U.S. Senate blocks a government funding bill, the U.S. Fed holds rates steady at 3.5%-3.75%, China executes 11 people linked to Myanmar criminal gangs, Tesla reports the first annual revenue decline in its history, Google DeepMind unveils AlphaGenome AI to predict gene mutations, and scientists discover an Earth-like 'potentially habitable' planet. Sources: Verity.News
#podcast #apple #tecnología #historiatech #youtube #ciberseguridad #airtag

Rolones playlist: https://acortar.link/syEyR7

Today we travel through key moments that shaped the history of technology and digital culture: from the launch of Apple's first Macintosh and the design inspiration drawn from Braun, to the birth of Netscape as the first commercial browser and the arrival of YouTube in 2005. We also explore current and curious topics: the media impact of Sydney Sweeney, the consumption of "azulito" in Mexico, an interview with Uvicuo, the risks of cyberattacks during the World Cup, the impressive Google DeepMind short film, the announcement of the Super Mario Galaxy movie, a Guinness World Record, and the new generation of the AirTag. A journey through history, controversy, digital culture, and the future of technology.

00:00 START
03:07 SPONSORS
03:34 COMMENTS
05:24 ON THIS DAY - APPLE'S FIRST MAC
10:29 DID APPLE COPY BRAUN?
11:48 NETSCAPE: FIRST COMMERCIAL WEB BROWSER
14:48 YOUTUBE IN 2005
20:49 SYDNEY SWEENEY AND HER LINGERIE
22:52 MEXICO SPENDS A LOT ON "AZULITO"
28:34 UVICUO INTERVIEW
46:10 CYBERATTACKS AT THE WORLD CUP
48:01 THE GOOGLE DEEPMIND SHORT FILM
51:13 SUPER MARIO GALAXY MOVIE
52:17 GUINNESS BOOK
01:01:06 NEW AIRTAG GENERATION AND CLOSING
Google DeepMind's Dawn Bloxwich and Tom Lue join "The Tech Download" to explore one of the biggest questions in technology today: Can we control AI? They break down how DeepMind is building safeguards, stress‑testing its models and working with global regulators to ensure advanced AI develops responsibly.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The Doomsday Clock, which frightens us every year, this 2026 places the end of the world just 85 seconds from the present. "And they're going to feel long!" jokes Jaime García Cantero, in connection with the report on Spaniards' scientific knowledge presented by the BBVA Foundation. A quarter of Spaniards, for example, think that aliens have visited Earth but that governments are hiding it, or that climate change is an invention by scientists to collect grants. Fortunately, the percentage of Spaniards who don't believe in vaccines is low, but even so, Nuño notes that Spain has lost its vaccination excellence and the WHO has removed us from the group of countries in which measles was considered eliminated. These anti-scientific ideas are not just a product of disinformation; they are generated by the cynicism of certain elites and the stupidity of many individuals. We discuss stupidity in the sense given to it by the Italian philosopher Carlo María Cipolla: "A person is stupid if they cause harm to other people without obtaining any personal gain or, even worse, while harming themselves." Anti-vaxxers would be a textbook example of stupidity. Nuño explains what AlphaGenome is and what it is for: Google DeepMind's new system for understanding the function of long DNA sequences, announced yesterday in Nature, and he helps us reflect on what it means for a big tech company, with an evident profit motive, to take up basic science. We also discuss the premiere of the documentary Melania, which arrives the same week Amazon announced 16,000 layoffs, and the company Polymarket, which lets people bet on extraordinary political events, such as the arrest of Maduro, and whose shareholders undoubtedly include people with privileged information. We end by celebrating the life of the Black mathematician Gladys Mae West, who died this week at age 95.
She was born in the Virginia of segregationist laws, which barred Black people from attending university, and despite that she managed to get a job as a mathematician in the Navy and contributed to the invention of GPS. After retiring, she completed her doctorate. On MCT we talk about plenty of stupid people, but there is also room for people who are the exact opposite. If only the latter were more influential than the former.
At the beginning of December 2025: ICE announced an enforcement surge in the Twin Cities.
January 6, 2026: DHS announced what it called the largest immigration enforcement operation ever carried out, sending 2,000 agents to the Minneapolis–Saint Paul metropolitan area.
January 7, 2026: ICE agent Jonathan Ross fatally shoots Renée Nicole Good.
January 8–14, 2026: Protests, vigils, and marches continue in Minneapolis against ICE and Operation Metro Surge.
January 13, 2026: 'Madness': two US citizens violently detained by ICE in Minnesota, officials say. Two Target employees forced to the ground, then into an SUV, then dumped in different parking lots.
January 14, 2026: A different ICE agent shoots and injures a man in north Minneapolis; the man survives after being shot in the leg. This second shooting further intensifies public anger and calls for an end to the federal surge.
January 17, 2026: National Anger Spills Into Target Stores, Again.
January 22, 2026: Target Store Staff Are Skipping Work Over ICE's Crackdown in Minnesota.
January 23, 2026: A statewide Day of Truth & Freedom / Minnesota general strike is held, described as the first U.S. general strike in about 80 years, explicitly targeting ICE operations and Operation Metro Surge. On that day, many workers, businesses, schools, and institutions in Minneapolis and across Minnesota participate in work stoppages, marches, and large rallies against federal immigration enforcement.
January 24, 2026: Federal Border Patrol agents assigned to the metro surge shoot and kill Alex Jeffrey Pretti.
January 25, 2026: The Minnesota Chamber of Commerce released this letter on behalf of more than 60 CEOs of Minnesota-based companies today.

Eight people have died in dealings with ICE so far in 2026. Keith Porter, Parady La, Heber Sanchaz Domínguez, Victor Manuel Diaz, Luis Beltran Yanez-Cruz, Luis Gustavo Nunez Caceres, and Geraldo Lunas Campos.
The high-profile fatal shootings follow the deaths of at least 32 people in ICE custody in 2025 – the highest number since 2004.

Minnesota CEOs Seek De-Escalation After Border Police Shooting
"The business community in Minnesota prides itself in providing leadership and solving problems to ensure a strong and vibrant state. The recent challenges facing our state have created widespread disruption and tragic loss of life. For the past several weeks, representatives of Minnesota's business community have been working every day behind the scenes with federal, state and local officials to advance real solutions. These efforts have included close communication with the Governor, the White House, the Vice President and local mayors. There are ways for us to come together to foster progress. With yesterday's tragic news, we are calling for an immediate de-escalation of tensions and for state, local and federal officials to work together to find real solutions. We have been working for generations to build a strong and vibrant state here in Minnesota and will do so in the months and years ahead with equal and even greater commitment. In this difficult moment for our community, we call for peace and focused cooperation among local, state and federal leaders to achieve a swift and durable solution that enables families, businesses, our employees, and communities across Minnesota to resume our work to build a bright and prosperous future."

3M – William Brown, Chairman and CEO
Ameriprise Financial – James Cracchiolo, Chairman and CEO
APi Group – Russell Becker, CEO
Best Buy – Corie Barry, CEO
C.H. Robinson – Dave Bozeman, President and CEO
Deluxe Corporation – Barry McCarthy, President and CEO
Donaldson Company, Inc. – Tod Carpenter, Chairman and CEO
Ecolab – Christophe Beck, Chairman and CEO
General Mills – Jeff Harmening, Chairman and CEO
H.B. Fuller – On behalf of our entire organization [CEO Celeste Mastin]
Hormel – Jeff Ettinger, Interim CEO
Medtronic – Geoff Martha, CEO and Chairman
nVent – Beth Wozniak, Chair and CEO
Patterson Companies – Robert Rajalingam, CEO
Pentair – John L. Stauch, President and CEO
Piper Sandler – Chad Abraham, Chairman and CEO
Sleep Number – Linda Findley, CEO (4/2025)
Solventum – Bryan Hanson, CEO
SPS Commerce – Chad Collins, CEO
SunOpta – Brian Kocher, CEO
Target – Michael Fiddelke, Incoming CEO
Tennant Company – Dave Huml, CEO
The Toro Company – Rick Olson, Chairman and CEO
U.S. Bancorp – Gunjan Kedia, CEO
Winnebago Industries – Michael Happe, CEO
Xcel Energy – Bob Frenzel, Chairman and CEO

Keith Rabois, Managing Director of Khosla Ventures: "no law enforcement has shot an innocent person. illegals are committing violent crimes everyday."

Khosla Ventures: "We prefer brutal honesty to hypocritical politeness." "Technology and innovation have reshaped our world and disrupted the way we all live and work. The future may not be knowable, but it is inventable—and it belongs to those who dare to imagine what's possible." Managing Directors: 5 dudes (3 Stanford; 3 Harvard).

Founder Vinod Khosla: "I agree with @EthanChoi7. Macho ICE vigilantes running amuck empowered by a conscious-less administration. The video was sickening to watch and the storytelling without facts or with invented fictitious facts by authorities almost unimaginable in a civilized society. ICE personnel must have ice water running thru their veins to treat other human beings this way. There is politics but humanity should transcend that"

Target's incoming CEO Michael Fiddelke, in a video message sent to employees (January 26, 2026): "Right now, as someone who is raising a family here in the Twin Cities and as a leader of this hometown company I want to acknowledge where we are. The violence and loss of life in our community is incredibly painful. I know it's weighing heavily on many of you across the country, as it is with me.
What's happening affects us not just as a company but as people, as neighbors, friends and family members." A company spokesman declined to comment. Still nothing official on the website.

Lloyd Vogel, CEO of Garage Grown Gear, said he felt compelled to condemn the shootings in a LinkedIn post because he lives and works in the Twin Cities. "My primary rationale was to show solidarity with my community," he told Business Insider. "It's also just bad for business when people are afraid to leave their homes." "There's so much fear in Minnesota right now," he said. "It would just be cowardice to not have a perspective on this."

JPMorgan Chase CEO and Chair Jamie Dimon (1/22/26, Davos): "I don't like what I'm seeing, five grown men beating up a little old lady. So I think we should calm down a little bit on the internal anger about immigration… We need these people. They work in our hospitals and hotels and restaurants and agriculture, and they're good people.… They should be treated that way."

On Saturday evening (1/24/2026), top technology executives gathered in Washington to attend a screening of "Melania," a documentary produced by Amazon about the first lady, Melania Trump. Black-tie event: guests were handed monogrammed buckets of popcorn, framed screening tickets for their trophy shelves, and a limited-edition copy of Trump's 2024 book of the same title as her documentary, "Melania." Among them were Andy Jassy, the chief executive of Amazon; Tim Cook, the chief executive of Apple; and Lisa Su, the chief executive of chip maker AMD. Also: Eric Yuan – CEO, Zoom; Lynn Martin – President, New York Stock Exchange; and General Electric CEO Larry Culp.

Apple CEO Tim Cook says it's 'time for de-escalation' in Minneapolis. Cook came under fire for appearing at the White House just hours after federal immigration authorities killed Alex Pretti, a veterans' nurse, in Minnesota. "This is a time for de-escalation," Cook wrote to Apple staff.
“I believe America is strongest when we live up to our highest ideals, when we treat everyone with dignity and respect no matter who they are or where they're from, and when we embrace our shared humanity.” Cook said he “had a good conversation with the president this week where I shared my views, and I appreciate his openness to engaging on issues that matter to us all." Apple's Cook says he's ‘heartbroken' by Minneapolis events and has spoken with Trump.

OpenAI CEO Sam Altman (1/27/26): “I love the US and its values of democracy and freedom and will be supportive of the country however I can; OpenAI will too. But part of loving the country is the American duty to push back against overreach. What's happening with ICE is going too far. There is a big difference between deporting violent criminals and what's happening now, and we need to get the distinction right. President Trump is a very strong leader, and I hope he will rise to this moment and unite the country. I am encouraged by the last few hours of response and hope to see trust rebuilt with transparent investigations. As a company, we aim to stick to our convictions and not get blown around by changing fashions too much. We didn't become super woke when that was popular, we didn't start talking about masculine corporate energy when that was popular, and we are not going to make a lot of performative statements now about safety or politics or anything else. But we are going to continue to try to figure out how to actually do the right thing as best as we can, engage with leaders and push for our values, and speak up clearly about it as needed.”

James Dyett, Global Business at OpenAI: “There is far more outrage from tech leaders over a wealth tax than masked ICE agents terrorizing communities and executing civilians in the streets. Tells you what you need to know about the values of our industry.”

Angel investor Jason Calacanis: “Once again, I will remind everyone that our leaders are failing us. 
True leadership would be to calm this situation down by telling these non-peaceful protestors to stay home while recalling these inadequately-trained agents.”

Jeff Dean, Chief Scientist of Google DeepMind & Google Research and Gemini Lead: “This is absolutely shameful. Agents of a federal agency unnecessarily escalating, and then executing a defenseless citizen whose offense appears to be using his cell phone camera. Every person regardless of political affiliation should be denouncing this.”

Jeffrey Sonnenfeld, senior associate dean for leadership studies at the Yale School of Management: "CEOs are feeling the community pressure." He said that reactions that convey sorrow and don't mention Trump or ICE are likely to be perceived as an unwelcome challenge to the White House's immigration agenda. "That is not what the Trump administration wanted," he said.

Business Roundtable CEO Joshua Bolten, asked to comment on the chaos in Minneapolis, replied with a statement endorsing the Minnesota Chamber's call for "cooperation between state, local, and federal authorities to immediately de-escalate the situation in Minneapolis."

Robert Pasin, CEO of toy company Radio Flyer, recently shared an email on LinkedIn that he sent to his employees that was critical of the shootings in Minneapolis: "I am deeply concerned about the current state of our democracy, and the continued actions we are seeing from President Trump and his administration that are intended to undermine democratic institutions, the rule of law, and the norms that hold our country together."

Dario Amodei, CEO of Anthropic, called the events in Minnesota a “horror” on Monday. An Anthropic spokeswoman said the company did not have contracts with ICE.

ICEout.tech statement from January 24, 2026: "We condemn the Border Patrol's killing of Alex Pretti and the violent surge of federal agents across our cities. 
The wanton brutality we've seen from ICE and CBP has removed any credibility that these actions are about immigration enforcement. Their goal is terror, cruelty, and suppression of dissent. This must end. Tech professionals are speaking up against this brutality, and we call on all our colleagues who share our values to use their voice. We know our industry leaders have leverage: in October, they persuaded Trump to call off a planned ICE surge in San Francisco, and big tech CEOs are in the White House tonight. Now they need to go further, and join us in demanding ICE out of all of our cities."

811: 508 names; 19 one name with title, 284 role only

Reid Hoffman says business leaders are wrong to stay silent about the Trump administration. The LinkedIn cofounder and tech investor said in an episode of the "Rapid Response" podcast published Tuesday that he rejects the idea that executives can simply wait out political turbulence: "The theory that if you just keep your mouth shut, the storm will blow over and it won't be a problem — you should be disabused of that theory now," Hoffman said.

Palantir Defends Work With ICE to Staff Following Killing of Alex Pretti: Leadership defended its work as in part improving “ICE's operational effectiveness.”
This episode is sponsored by Your360 AI. Get 10% off through January 2026 at Your360.ai with code: INSIDE. On this week's AI Inside, Jeff Jarvis and Jason Howell test Google's new Gemini-powered Auto-Browse Chrome agents, wonder whether Yahoo Scout really matters, question Apple's Gemini-fueled Siri revamp and rumored AI pin, and explore Mozilla's “rebel alliance” bet on open-source AI. Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 00:00 - Podcast begins 0:04:30 - Chrome takes on AI browsers with tighter Gemini integration, agentic features for autonomous tasks 0:26:42 - Yahoo Scout looks like a more web-friendly take on AI search 0:38:31 - Apple to Revamp Siri as a Built-In iPhone, Mac Chatbot to Fend Off OpenAI 0:42:59 - Not to be outdone by OpenAI, Apple is reportedly developing an AI wearable 0:47:10 - Mozilla is building an AI ‘rebel alliance' to take on industry heavyweights OpenAI, Anthropic 0:56:14 - Google DeepMind launches AI tool to help identify genetic drivers of disease 0:59:05 - The EU tells Google to give external AI assistants the same access to Android as Gemini has 1:01:07 - Shopify Merchants to Pay 4% Fee on ChatGPT Checkout Sales 1:02:23 - Microsoft announces powerful new chip for AI inference 1:03:50 - EU launches formal investigation of xAI over Grok's sexualized deepfakes Learn more about your ad choices. Visit megaphone.fm/adchoices
Explore how Apple is transforming Siri into a Gemini-powered AI chatbot, why emotional voice AI is the next frontier, and how HP's Elite Board G1—the PC in a keyboard—may be a game-changer for blind users. Steven Scott and Shaun Preece dive into the major tech stories shaping 2026. They unpack Bloomberg's report that Siri will evolve into a fully integrated AI chatbot powered by Google's Gemini, raising questions about privacy, speed, and Apple's long-term AI strategy. The hosts debate why the term “chatbot” carries negative connotations and how Apple could regain trust after its shaky Apple Intelligence rollout. The discussion then shifts to emotional AI and Google DeepMind's acquisition of Hume AI's Alan Cowan. They explore how future voice assistants could detect and respond to human emotion, potentially revolutionizing AI companionship, therapy, and conversational interfaces for everyone. Later, Steven interviews HP's Caleb Fleming about the Elite Board G1, a Windows 11 PC built inside a keyboard with up to 64GB RAM and 2TB storage. They discuss its upgradeability, enterprise focus, and unexpected potential as an ideal computer for blind users without the bulk of traditional desktops or laptops. The episode closes with quick takes on Sony LinkBuds Clip open-ear audio and Toronto Police trialling AI for non-emergency calls. 
Relevant Links
Bloomberg – Siri AI Chatbot Report: https://www.bloomberg.com
HP Elite Board G1: https://www.hp.com
Hume AI: https://www.hume.ai

Find Double Tap online: YouTube, Double Tap Website

Follow on:
YouTube: https://www.doubletaponair.com/youtube
X (formerly Twitter): https://www.doubletaponair.com/x
Instagram: https://www.doubletaponair.com/instagram
TikTok: https://www.doubletaponair.com/tiktok
Threads: https://www.doubletaponair.com/threads
Facebook: https://www.doubletaponair.com/facebook
LinkedIn: https://www.doubletaponair.com/linkedin

Subscribe to the Podcast:
Apple: https://www.doubletaponair.com/apple
Spotify: https://www.doubletaponair.com/spotify
RSS: https://www.doubletaponair.com/podcast
iHeartRadio: https://www.doubletaponair.com/iheart

About Double Tap
Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited. "Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI is front and center in Davos this year, as world leaders and tech executives debate how quickly the technology is reshaping the economy and workforce. Demis Hassabis, co-founder and CEO of Google DeepMind, sits down with CNBC's Andrew Ross Sorkin at the World Economic Forum. The two discuss Gemini's position in the AI race, the evolution of artificial general intelligence (AGI), and what it all means for jobs.

In this episode:
Demis Hassabis, @demihassabis
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In Davos, the tech giants set the tone for the year ahead: artificial intelligence, robots, jobs, and digital sovereignty. Between spectacular announcements, promises, and gray areas, a breakdown of a week in which technology asserted itself at the summit of power.
Claude Code is taking over and even the Wall Street Journal is Claude Pilled. Anthropic CEO Dario Amodei just said we're 6 months from AI doing most software engineering. No big deal. Claude Code skills are exploding: Remotion for AI video editing, Pencil for infinite design canvases, Compound Engineering for spinning up agent fleets while you sleep. Your $200/month Max subscription doesn't stand a chance. Plus Apple's working on an AI pin, Runway dropped Gen 4.5, LTX Studio has a wild audio-to-video model, and there's an AI monk with 2.5 million followers selling healing journeys. WE'RE CLAUDE PILLED NOW. RESISTANCE IS FUTILE.

Come to our Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

Show Links
Anthropic CEO Dario Amodei at Davos: https://youtu.be/9Zz2KrBDXUo?si=JliJ8xSnndouVWUM
Even The Wall Street Journal Is Claude Code Pilled: https://x.com/WSJ/status/2014186506320007182?s=20
Remotion: Video Editing In Claude: https://www.remotion.dev/
Coding example: https://x.com/Remotion/status/2013626968386765291?s=20
Very good Remotion Video Example: https://x.com/justinmfarrugia/status/2014162910168162478?s=20
Infinite Design Canvas: https://x.com/tomkrcha/status/2014028990810300498?s=20
Compound Engineering: https://x.com/kieranklaassen/status/2013776190042185971?s=20
Matt Pocock's Claude Tutorials: https://x.com/mattpocockuk/status/2014336302120923513?s=20
Meanwhile, Claude has a constitution now… https://www.anthropic.com/constitution
Apple Wearable AI Pin: https://www.theinformation.com/articles/apple-developing-ai-wearable-pin?rc=c3oojq&shared=2c49629944958284
New Apple AI Chatbot This Fall? https://x.com/markgurman/status/2014063049821299069?s=20
Google Buys Hume As Voice Tech Heats Up? 
https://www.wired.com/story/google-hires-hume-ai-ceo-licensing-deal-gemini/ (paywall)
https://techcrunch.com/2026/01/22/google-reportedly-snags-up-team-behind-ai-voice-startup-hume-ai/
Google DeepMind's D4RT Model: https://x.com/GoogleDeepMind/status/2014352808426807527?s=20
https://deepmind.google/blog/d4rt-teaching-ai-to-see-the-world-in-four-dimensions/
Runway Gen 4.5 Image to Video: https://x.com/runwayml/status/2014090404769976744?s=20
Audio-to-Video From LTX: https://x.com/LTXStudio/status/2013650214171877852?s=20
Good for music videos: https://x.com/fofrAI/status/2014110494315913706?s=20
Borat in ALL THE THINGS: https://x.com/maxescu/status/2013650830650741130?s=20
Goodbye Kaplan, Gemini Launches SAT Practice Tests: https://x.com/Google/status/2014020819173687626?s=20
The AI Monk: Do We Want This? https://x.com/pubity/status/2009762025707069545?s=20
Every Street Fighter Pose Brought To Life: https://www.reddit.com/r/aivideo/comments/1qj3bys/every_street_fighter_ii_losing_pose_brought_to/
The ELEVEN ALBUM: https://x.com/elevenlabsio/status/2014021275107172618?s=20
Epic Sports Anime: https://www.tiktok.com/t/ZThSmmnVA/
Vibe Coded Driving Game: https://www.youtube.com/watch?v=mY-4Ls_2TS0&t=3s
Our buddy Theoretically Media launched a newsletter! https://theoreticallymedia.beehiiv.com/p/openai-s-suno-killer-the-cinematic-prompt-you-ve-been-waiting-for
SLIPPERY ROBIT: https://x.com/rohanpaul_ai/status/2013856833426071787?s=20
From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning: watching his team grow from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards in every category. Yi returns to dig into the inside story of the IMO effort and more!

We discuss:
* Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
* Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory); on-policy is the model generating its own outputs, getting rewarded, and training on its own experience. "Humans learn by making mistakes, not by copying"
* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models); (2) Yann LeCun's JEPA + FAIR's code world models (modeling internal execution state); (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix. "The model is better than me at this"
* The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
* Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different — you hit the shuttlecock and hear glass shatter, cause and effect are too far apart"
* The closed-lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
* Why ideas still matter: "the last five years weren't just blind scaling — transformers, pre-training, RL, self-consistency, all had to play well together to get us here"
* Gemini Singapore: hiring RL and reasoning researchers, looking for a track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier

Yi Tay
Google DeepMind: https://deepmind.google
X: https://x.com/YiTayML

Chapters
00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey
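The self-consistency idea discussed in the episode (sample several independent reasoning paths, then majority-vote over their final answers) can be sketched in a few lines. This is a minimal illustration, not anything from Gemini; `sample_answer` is a hypothetical stand-in for one stochastic model call.

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # Hypothetical stand-in for one stochastic model call that returns a
    # final answer; here it returns the right answer 60% of the time and
    # a scattered wrong digit otherwise.
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

def self_consistency(question, n_samples=25, seed=0):
    # Sample several independent "reasoning paths" and majority-vote
    # over their final answers.
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    answer, count = votes.most_common(1)[0]
    return answer, count / n_samples

answer, agreement = self_consistency("What is 6 x 7?")
print(answer, agreement)
```

The point of the sketch: even when any single sample is unreliable, errors tend to scatter while correct answers tend to agree, so the voted answer is far more reliable than one-shot inference. LM judges and internal verification are variations on the same aggregation idea.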
Get full access to Latent.Space at www.latent.space/subscribe
The Midwest Real Estate Investor Conference (MREIC) is back April 27–28, 2026 at DeVos Place in Grand Rapids, Michigan, and this year's theme is Thrive. This quick special announcement features Erika Farley, Executive Director of the Rental Property Owners Association of Michigan (RPOAM), breaking down what's new, who it's for, and why you should lock in your ticket now. (Midwest Real Estate Investor Conference)

You'll hear how the 2026 agenda is built around systems, real-world strategy, and operating resilience, including a major focus on AI integration and a grounded economic outlook for 2026 that investors can actually use. (Midwest Real Estate Investor Conference)

Conference Theme
Thrive, with a clear emphasis on "Where Strategy Meets Systems" and building portfolios that can perform in changing market cycles. (Midwest Real Estate Investor Conference)

What we cover in this conversation
Why "Thrive" matters right now and what attendees should expect to walk away with
AI keynote with Steve Brown (former executive at Google DeepMind and Intel) and what practical AI strategy looks like for investors and operators
Economic forecast keynote with Dr. Paul Isely (GVSU) and why it's consistently one of the most packed sessions
Featured speakers and what they're known for (private equity, tax and legal, multifamily, commercial, missing middle housing)
Networking, kickoff reception, vendors, and sponsor support that make the event worth showing up for

Featured speakers mentioned
Steve Brown (AI keynote)
Dr. Paul Isely (2026 economic outlook keynote)
John Burley, Mark Kohler, Anthony Chara, Paul Moore, Nathan Biller

Key agenda focus areas (2026 "Thrive" framework)
Smart Systems and AI Integration
Market Outlook and Economic Data
Operational Risk and Compliance
Capital and Acquisitions Strategy
Sustainable Growth and Scaling
Advanced Portfolio Management

Networking and Add-on Experiences
MREIC Kickoff Reception: Sunday, April 26, 2026 (5:00–7:00 PM). Included with registration, RSVP required, space limited.
Private Keynote Strategy Forum: limited-capacity add-on for deeper discussion (conference registration required).

Hotels and Lodging
Discounted conference hotel options include the Amway Grand Plaza (connected to DeVos Place) and the JW Marriott Grand Rapids (short walk). Book through the official hotel block.

Pricing Note
Super Early Bird pricing runs through January 31, and pricing increases February 1. (Midwest Real Estate Investor Conference)

Quick timestamps (approx.)
00:00 Conference dates, location, and why this matters
00:40 Theme: Thrive in Every Market
01:00 AI keynote and why it's front and center
02:00 Economic outlook with Dr. Paul Isely
02:40 Speaker lineup and topic variety
05:00 Networking and kickoff reception
06:00 VIP and deeper access opportunities
08:00 Sponsors, vendors, and early pricing

Register
Lock in your seat at midwestreiconference.com. (Midwest Real Estate Investor Conference)
In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers' proposed "confessions" framework designed to monitor for and detect dishonest outputs. They break down the researchers' proof-of-concept results and the framework's resilience to reward hacking, along with its limits in connection with hallucinations. Then they turn to Google DeepMind's "Distributional AGI Safety," exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors' proposed four-layer safety stack. Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
Episode 2812 of Radiogeek covered several important topics: Nova Launcher may live on after all, but with ads; a full leak of the Galaxy S26, with a February unveiling and a March retail launch; YouTube will soon let creators make Shorts using their own AI likeness; and finally, the CEO of Google DeepMind reiterates that there are "no plans" for ads in Gemini. You can find all this information on our website www.infosertec.com.ar, on our Telegram/WhatsApp channel, or on Instagram. We look forward to your comments.
Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Can AI compress the years-long research time of a PhD into seconds? Research scientist Max Jaderberg explores how "AI analogs" simulate real-world lab work with staggering speed and scale, unlocking new insights on protein folding and drug discovery. Drawing on his experience working on Isomorphic Labs' and Google DeepMind's AlphaFold 3 — an AI model for predicting the structure of molecules — Jaderberg explains how this new technology frees up researchers' time and resources to better understand the real, messy world and tackle the next frontiers of science, medicine and more. Hosted on Acast. See acast.com/privacy for more information.
Become a member of this channel to get perks: https://www.youtube.com/channel/UCJIPFjZSCWR15_jxBaK2fQQ/join

A while back, while traveling, I watched the newly released documentary "The Thinking Game," and "stunning" is the only word for it. The film chronicles DeepMind founder Demis Hassabis's pursuit of artificial general intelligence (AGI). The moment I finished it, I decided I had to make an episode about this man and the world-changing company he built. It's hard to imagine that AlphaGo, AlphaFold, and even Gemini all trace back to the epiphany of a 13-year-old chess prodigy. After a 10-hour match, Demis realized that devoting the human brain solely to zero-sum games was a waste. So he moved from game development into neuroscience, eventually founded DeepMind, and pitched Peter Thiel and Elon Musk a crazy plan: "We will build an Apollo program for AI. Step one: solve intelligence. Step two: use it to solve everything else." This episode is more than a companion to the documentary. I've pieced together Demis's 20-year journey, including the inside story of the Google vs. Facebook talent war over DeepMind, how AlphaFold cracked a problem that had stumped scientists for 50 years, and how Google DeepMind is now fighting back from adversity. This is not just a story about building software or games; it's a journey of humanity trying to unravel the riddle of intelligence and crack the code of life. I hope this episode helps you make sense of the grandest scientific experiment in human history.

Highlights from this episode:
♟️ The chess prodigy's epiphany: why did a 10-hour draw convince him to give up chess for AI?
This episode is sponsored by Your360 AI. Get 10% off through January 2026 at Your360.ai with code: INSIDE. Tulsee Doshi is Senior Director and Head of Product for Gemini Models at Google DeepMind. She joins us to reveal how real-world usage is reshaping Google's AI roadmap in unexpected ways. We explore the surprising discovery that Deep Think excels at creative writing rather than just academic research, why the team is now pursuing layer-based editing capabilities for Nano Banana Pro based on user demand, and how internal adoption patterns exceeded expectations. We also cover Gemini 3's reasoning breakthroughs, the path from research to production, and what Google learned from achieving gold medal performance at the International Mathematical Olympiad. Hit play for a candid discussion about AI development. Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 00:00:00 - Podcast begins 00:01:32 - Introducing Tulsee Doshi, Head of Product for Gemini Models 00:03:20 - Nano Banana Pro Launch: Surprises and User Adoption Two Months Later 00:04:17 - What Users Are Asking For: Future Directions for Nano Banana Pro 00:05:21 - User Demand for Photoshop-Style Editing and Layered Image Control 00:06:04 - Gemini 3 Integration Across Google Products and Beyond 00:08:23 - Will AI Change How We Interact With Google Products? 00:12:20 - The Future of Real World Models and 3D Spatial Understanding 00:27:36 - Deep Think Explained: When to Use Different Gemini Modes 00:28:41 - Deep Think Use Cases: Academic Research, Enterprise, and Consumer Applications 00:30:51 - Surprising Deep Think Application: Creative Writing Beyond Academic Research 00:31:45 - The Future of AI-Generated Movies and Long-Form Content 00:32:00 - Thank you to Tulsee Doshi for joining the AI Inside podcast Learn more about your ad choices. Visit megaphone.fm/adchoices
See more: https://thinkfuture.substack.com
Connect with Steve: https://stevebrown.ai

What happens when technology moves faster than our ability to forecast it? In this episode of thinkfuture, host Chris Kalaboukis speaks with Steve Brown, veteran technologist, former Intel futurist, and former member of Google DeepMind's AI research lab. With over 35 years of experience across high-tech, digital transformation, and AI, Steve offers a rare long-view perspective on where we are—and why predicting what comes next has never been harder.

Steve explains how long-term forecasting used to be feasible when technological progress followed clearer trajectories. Today, breakthroughs in AI—and soon quantum computing—are compressing decades of progress into just a few years. The result is a future that's accelerating faster than our institutions, economic models, and assumptions can keep up with.

We cover:
- Why 10-year technology forecasts are now nearly impossible
- How AI is already accelerating progress in math, physics, and science
- Why the combination of AI and quantum computing could reshape material science, chemistry, and biology
- The likelihood of Artificial General Intelligence (AGI) arriving within 5–10 years
- How AGI could disrupt jobs and force a rethink of capitalism itself
- Why labor may increasingly turn into capital
- The need for new economic models, shorter workweeks, or earlier retirement
- How humans find meaning when machines handle most productive work

Steve argues we may see more progress in the next five years than in the last fifty—and that the biggest challenge won't be technological, but human. If you're interested in AI, AGI, the future of work, economic disruption, or the limits of forecasting, this conversation offers a grounded, thoughtful look at what may be coming sooner than we expect.
Thank you to our sponsor, Mantle! Canton's in bed with Nasdaq, a Google DeepMind paper talks up the role of blockchain in an agentic economy, and an alleged insider cashes in on Maduro's capture. In this DEX in the City episode, hosts Katherine Kirkpatrick Bos, Jessi Brooks and Vy Le dive into the implications of Canton's Nasdaq deal, why DeepMind's study matters for crypto, and the legality of insider trading on prediction markets. Vy highlights what Canton's Nasdaq deal signals about the priorities of institutions adopting blockchain technology. Katherine and Jessi debate what happens when the machines take over. Plus, should federal officials be banned from using prediction markets?

Hosts:
Jessi Brooks
Katherine Kirkpatrick Bos
TuongVy Le

Links:
Bitcoin Rallies to $93,000 After U.S. Attack on Venezuela
How the x402 Standard Is Enabling AI Agents to Pay Each Other
Why the Black Friday Whale's $192 Million Crypto Trade Was Legal
DEX in the City: Insider Trading and Crypto: What the Law Actually Says
Google DeepMind's agentic economy paper
Pawthereum's website
A copy of Rep. Ritchie's bill

Learn more about your ad choices. Visit megaphone.fm/adchoices
Weather forecasting drives billions of economic decisions, from grid operations to evacuation planning. Better forecasting could improve supply chain planning, disaster warnings, and renewable integration. The industry has decades of satellite observations and ground measurements, making it ripe for AI-driven advancements. And it's already happening. But how exactly does AI get used in weather forecasting, and how does it actually lead to improvements? In this episode, Shayle talks to Peter Battaglia, senior director of research at Google DeepMind's sustainability program, which launched a new AI-powered weather forecasting model in November 2025. They cover topics like:

- Why precipitation is so much harder to predict than temperature
- How the weather industry works, with governments creating global models and private companies refining them for specific use cases
- What AI models can see that traditional supercomputer simulations can't
- Novel sources of data like cell phones, doorbells, and social media

Resources:
Latitude Media: Where are we on using AI to predict the weather?
Latitude Media: Could AI-fueled weather forecasts boost renewable energy production?
Catalyst: Specialized AI brains for physical industry

Credits: Hosted by Shayle Kann. Produced and edited by Daniel Woldorff. Original music and engineering by Sean Marquand. Stephen Lacey is our executive editor.

Catalyst is brought to you by Uplight. Uplight activates energy customers and their connected devices to generate, shift, and save energy, improving grid resilience and energy affordability while accelerating decarbonization. Learn how Uplight is helping utilities unlock flexible load at scale at uplight.com. Catalyst is brought to you by Antenna Group, the public relations and strategic marketing agency of choice for climate, energy, and infrastructure leaders.
If you're a startup, investor, or global corporation that's looking to tell your climate story, demonstrate your impact, or accelerate your growth, Antenna Group's team of industry insiders is ready to help. Learn more at antennagroup.com.