POPULARITY
Today on Silicon Carne we look back at the World Economic Forum in Davos 2026, where the statements from the bosses of Big Tech were explosive. So, what did they tell us about the future of AI, of employment, and of our civilization?
AI is front and center in Davos this year, as world leaders and tech executives debate how quickly the technology is reshaping the economy and workforce. Demis Hassabis, co-founder and CEO of Google DeepMind, sits down with CNBC's Andrew Ross Sorkin at the World Economic Forum. The two discuss Gemini's position in the AI race, the evolution of artificial general intelligence (AGI), and what it all means for jobs. In this episode: Demis Hassabis, @demishassabis; Andrew Ross Sorkin, @andrewrsorkin; Cameron Costa, @CameronCostaNY. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
At Davos, leading AI lab heads sharply accelerated their timelines for artificial general intelligence, with Demis Hassabis pointing to a roughly five-year horizon and Dario Amodei arguing it could arrive far sooner. Those compressed timelines are now reshaping debates around chip exports, AI pauses, and whether global coordination is even possible as competition intensifies. The message is no longer theoretical risk—it's near-term disruption, and society is not ready. In the headlines: Google says it has no plans for ads in Gemini, Meta may be pulling back on in-house chips, OpenAI signs a major enterprise deal with ServiceNow, and new signals emerge on the timing of OpenAI's first hardware.
Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Get our Resource Vault - a curated collection of pro-level business resources (tools, guides, databases): https://clickhubspot.com/jbg Episode 786: Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) tell the story of Demis Hassabis ( https://x.com/demishassabis ) and the creation of DeepMind. Show Notes: (0:00) Demis the Menace (22:05) The only resource you need is resourcefulness (24:57) Move 37 (29:38) The olympics of protein folding (46:39) We are the gorillas — Links: • The Thinking Game - https://www.youtube.com/watch?v=d95J8yzvjbQ • Why We Do What We Do - https://www.youtube.com/watch?v=BwFOwyoH-3g • Fierce Nerds - https://paulgraham.com/fn.html • Isomorphic Labs - https://www.isomorphiclabs.com/ • If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com/ — Check Out Shaan's Stuff: • Shaan's weekly email - https://www.shaanpuri.com • Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents. • Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC • I run all my newsletters on Beehiiv and you should too + we're giving away $10k to our favorite newsletter, check it out: beehiiv.com/mfm-challenge — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Sam's List - http://samslist.co/ My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Today on the AI Daily Brief, why AI leadership is shifting decisively to the CEO—and why that shift is happening now as AI moves from experimentation to core enterprise strategy. Drawing on new survey data, the episode explores what happens when AI becomes recession-proof, ROI timelines pull forward, and agentic systems start reshaping organizations at scale. Before that, in the headlines: Replit pushes vibe coding all the way to mobile app stores, Higgsfield rockets to unicorn status on explosive growth, Thinking Machines Labs faces a wave of high-profile departures, and DeepMind's Demis Hassabis warns that Chinese AI models are now only months behind the frontier.
Brought to you by:
• KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
• Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
• Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
• AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
• LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
• Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
• The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
Hosted by Arjun Kharpal and Steve Kovach, CNBC's “The Tech Download” cuts through the noise to unpack the tech stories that matter most for your money. In the debut episode, Google DeepMind CEO Demis Hassabis reveals how the leading AI research lab is driving breakthroughs, as well as what the race to artificial general intelligence means for science, business and society. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Hosted by Arjun Kharpal in London and Steve Kovach in New York, The Tech Download cuts through the noise to unpack the technology stories that matter most — and what they mean for your money. In Season One, we take you inside Google DeepMind, the brains behind the tech giant's artificial intelligence push. Hear from the people shaping the future of AI, including a one-on-one with co-founder and CEO Demis Hassabis. From breakthroughs in science to the societal impact of AI, we dive deep into the opportunities and risks behind what is likely to be the most transformative technology of our time. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Aishwarya Naresh Reganti and Kiriti Badam have helped build and launch more than 50 enterprise AI products across companies like OpenAI, Google, Amazon, and Databricks. Based on these experiences, they've developed a small set of best practices for building and scaling successful AI products. The goal of this conversation is to save you and your team a lot of pain and suffering.
We discuss:
1. Two key ways AI products differ from traditional software, and why that fundamentally changes how they should be built
2. Common patterns and anti-patterns in companies that build strong AI products versus those that struggle
3. A framework they developed from real-world experience to iteratively build AI products that create a flywheel of improvement
4. Why obsessing about customer trust and reliability is an underrated driver of successful AI products
5. Why evals aren't a cure-all, and the most common misconceptions people have about them
6. The skills that matter most for builders in the AI era
Brought to you by:
• Merge—The fastest way to ship 220+ integrations: https://merge.dev/lenny
• Strella—The AI-powered customer research platform: https://strella.io/lenny
• Brex—The banking solution for startups: https://www.brex.com/product/business-account?ref_code=bmk_dp_brand1H25_ln_new_fs
Transcript: https://www.lennysnewsletter.com/p/what-openai-and-google-engineers-learned
My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/183007822/referenced
Get 15% off Aishwarya and Kiriti's Maven course, Building Agentic AI Applications with a Problem-First Approach, using this link: https://bit.ly/3V5XJFp
Where to find Aishwarya Naresh Reganti:
• LinkedIn: https://www.linkedin.com/in/areganti
• GitHub: https://github.com/aishwaryanr/awesome-generative-ai-guide
• X: https://x.com/aish_reganti
Where to find Kiriti Badam:
• LinkedIn: https://www.linkedin.com/in/sai-kiriti-badam
• X: https://x.com/kiritibadam
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Aishwarya and Kiriti
(05:03) Challenges in AI product development
(07:36) Key differences between AI and traditional software
(13:19) Building AI products: start small and scale
(15:23) The importance of human control in AI systems
(22:38) Avoiding prompt injection and jailbreaking
(25:18) Patterns for successful AI product development
(33:20) The debate on evals and production monitoring
(41:27) Codex team's approach to evals and customer feedback
(45:41) Continuous calibration, continuous development (CC/CD) framework
(58:07) Emerging patterns and calibration
(01:01:24) Overhyped and under-hyped AI concepts
(01:05:17) The future of AI
(01:08:41) Skills and best practices for building AI products
(01:14:04) Lightning round and final thoughts
Referenced:
• LevelUp Labs: https://levelup-labs.ai/
• Why your AI product needs a different development lifecycle: https://www.lennysnewsletter.com/p/why-your-ai-product-needs-a-different
• Booking.com: https://www.booking.com
• Research paper on agents in production (by Matei Zaharia's lab): https://arxiv.org/pdf/2512.04123
• Matei Zaharia's research on Google Scholar: https://scholar.google.com/citations?user=I1EvjZsAAAAJ&hl=en
• The coming AI security crisis (and what to do about it) | Sander Schulhoff: https://www.lennysnewsletter.com/p/the-coming-ai-security-crisis
• Gajen Kandiah on LinkedIn: https://www.linkedin.com/in/gajenkandiah
• Rackspace: https://www.rackspace.com
• The AI-native startup: 5 products, 7-figure revenue, 100% AI-written code | Dan Shipper (co-founder/CEO of Every): https://www.lennysnewsletter.com/p/inside-every-dan-shipper
• Semantic Diffusion: https://martinfowler.com/bliki/SemanticDiffusion.html
• LMArena: https://lmarena.ai
• Artificial Analysis: https://artificialanalysis.ai/leaderboards/providers
• Why humans are AI's biggest bottleneck (and what's coming in 2026) | Alexander Embiricos (OpenAI Codex Product Lead): https://www.lennysnewsletter.com/p/why-humans-are-ais-biggest-bottleneck
• Airline held liable for its chatbot giving passenger bad advice—what this means for travellers: https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
• Demis Hassabis on LinkedIn: https://www.linkedin.com/in/demishassabis
• We replaced our sales team with 20 AI agents—here's what happened | Jason Lemkin (SaaStr): https://www.lennysnewsletter.com/p/we-replaced-our-sales-team-with-20-ai-agents
• Socrates's quote: https://en.wikipedia.org/wiki/The_unexamined_life_is_not_worth_living
• Noah Smith's newsletter: https://www.noahpinion.blog
• Silicon Valley on HBO Max: https://www.hbomax.com/shows/silicon-valley/b4583939-e39f-4b5c-822d-5b6cc186172d
• Clair Obscur: Expedition 33: https://store.steampowered.com/app/1903340/Clair_Obscur_Expedition_33/
• Wisprflow: https://wisprflow.ai
• Raycast: https://www.raycast.com
• Steve Jobs's quote: https://www.goodreads.com/quotes/463176-you-can-t-connect-the-dots-looking-forward-you-can-only
Recommended books:
• When Breath Becomes Air: https://www.amazon.com/When-Breath-Becomes-Paul-Kalanithi/dp/081298840X
• The Three-Body Problem: https://www.amazon.com/Three-Body-Problem-Cixin-Liu/dp/0765382032
• A Fire Upon the Deep: https://www.amazon.com/Fire-Upon-Deep-Zones-Thought/dp/0812515285
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
DEEPMIND AND THE GOOGLE ACQUISITION. Gary Rivlin on Mustafa Suleyman and Demis Hassabis founding DeepMind to master games, their sale to Google for $650 million, and the culture clash that followed.
Welcome to episode 337 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and Ryan have hit the recording studio to bring you all the latest in cloud and AI news, from acquisitions and price hikes to new tools that Ryan somehow loves but also hates? We don't understand either… but let's get started!
Titles we almost went with this week:
• Prompt Engineering Our Way Into Trouble
• The Demo Worked Yesterday, We Swear
• It Scales Horizontally, Trust Us
• Responsible AI But Terrible Copy (Marketing Edition)
General News
00:58 Watch ‘The Thinking Game' documentary for free on YouTube
Google DeepMind is releasing the “The Thinking Game” documentary for free on YouTube starting November 25, marking the fifth anniversary of AlphaFold. The feature-length film provides behind-the-scenes access to the AI lab and documents the team’s work toward artificial general intelligence over five years. The documentary captures the moment when the AlphaFold team learned they had solved the 50-year protein folding problem in biology, a scientific achievement that recently earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry. This represents one of the most significant practical applications of deep learning to fundamental scientific research. The film was produced by the same award-winning team that created the AlphaGo documentary, which chronicled DeepMind’s earlier achievement in mastering the game of Go. For cloud and AI practitioners, this offers insight into how Google DeepMind approaches complex AI research problems and the development process behind their models. While this is primarily a documentary release rather than a technical product announcement, it provides context for understanding Google’s broader AI strategy and the research foundation underlying its cloud AI services. The AlphaFold model itself is available through Google Cloud for protein structure prediction workloads.
01:54 Justin – “If you're not into technology, don't care about any of that, and don't care about AI and how they built all the AI models that are now powering the world of LLMs we have, you will not like this documentary.”
04:22 ServiceNow to buy Armis in $7.7 billion security deal • The Register
ServiceNow is acquiring Armis for $7.75 billion to integrate real-time security intelligence with its Configuration Management Database, allowing customers to identify vulnerabilities across IT, OT, and medical devices and remediate them through automated workflows.
Demis Hassabis is the CEO of Google DeepMind. He joined Big Technology Podcast in early 2025 to discuss the cutting edge of AI and where the research is heading. In this conversation, we cover the path to artificial general intelligence, how long it will take to get there, how to build world models, whether AIs can be creative, and how AIs are trying to deceive researchers. Stay tuned for the second half where we discuss Google's plan for smart glasses and Hassabis's vision for a virtual cell. Hit play for a fascinating discussion with an AI pioneer that will both break news and leave you deeply informed about the state of AI and its promising future. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com --- Wealthfront.com/bigtech. If eligible for the overall boosted 3.90% rate offered with this promo, your boosted rate is subject to change if the 3.25% base rate decreases during the 3-month promo period. The Cash Account, which is not a deposit account, is offered by Wealthfront Brokerage LLC ("Wealthfront Brokerage"), Member FINRA/SIPC, not a bank. The Annual Percentage Yield ("APY") on cash deposits as of 12/19/25, is representative, requires no minimum, and may change at any time. The APY reflects the weighted average of deposit balances at participating Program Banks, which are not allocated equally. Wealthfront Brokerage sweeps cash balances to Program Banks, where they earn the variable base APY. Instant withdrawals are subject to certain conditions and processing times may vary. Learn more about your ad choices. Visit megaphone.fm/adchoices
Is 2026 the year society finally pushes back against artificial intelligence? In this year's final episode, Paul Roetzer and Mike Kaput explore the immediate future of AGI, analyzing Demis Hassabis's warning of a shift ten times larger than the Industrial Revolution and Shane Legg's prediction of human-level intelligence by 2028. The hosts break down critical developments, including Google's Gemini 3 Flash, OpenAI's staggering valuation talks, and the rise of world models that simulate physical reality. Show Notes: Access the show notes and show links here Click here to take this week's AI Pulse. Timestamps: 00:00:00 — Intro 00:03:27 — AI Pulse 00:07:05 — AI Trends to Watch in 2026 00:31:59 — Demis Hassabis on the Future of Intelligence 00:42:35 — DeepMind Co-Founder on the Arrival of AGI 00:47:53 — Are AI Job Fears Overblown? 00:56:05 — Gemini 3 Flash 00:59:38 — OpenAI Eyes Billions in Fresh Funding 01:02:19 — OpenAI Releases New ChatGPT Images 01:04:18 — Karen Hao Issues AI Book Correction 01:08:18 — AI Keeps Getting Political (Roundup) 01:12:51 — AI World Models 01:17:31 — US Government Launches Tech Force This episode is brought to you by AI Academy by SmarterX. AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy
Világuralom - Artificial intelligence, ChatGPT, and the rivalry reshaping our entire world. The story of artificial intelligence's rise and the ongoing contest between two brilliant minds - that is what Parmy Olson's gripping book is about. Demis Hassabis, the mastermind behind DeepMind, and Sam Altman, the head of OpenAI: although both set out to use the technology for the benefit of humanity, each came under the influence of a powerful corporate giant and entered a dangerous race to ensure that their platform would define the world's future. The author is a technology journalist at Bloomberg with far-reaching connections, and she has written an astonishingly detailed, exciting book built on exclusive information. The guest of Könyvben utazom is András Ligárt, founding director of ACE Network, which supported the publication of the book by BookLab Kiadó.
Recorded on-stage at Øredev 2025, Fredrik talks to Justyna Zander about AI for self-driving cars, the noise of the present, and more. Don’t let the noise of today demolish the positive signal of the future! Many thanks to Øredev for inviting Kodsnack again, they paid for the trip and the editing time of these keynote recordings, but have no say about the content of these or any other episodes. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.
Links:
• Øredev
• All the presentation videos from Øredev 2025
• Justyna Zander
• Physical AI: crafting resilient systems with emotional intelligence - Justyna’s keynote
• Emotional intelligence
• Empathy
• Hyperscalers
• Snowflake
• Demis Hassabis
Titles:
• You learn something new
• We have it in the spatial sense
• The policy of the machine
• What did the human tell me to do?
• How do you teach the machine empathy?
• The first to be disrupted
• The intent of a human
• Engineering with purpose
• Statistics on steroids
Hey everyone, in this episode I welcome my great friend Fabrício Carraro, from the podcast IA sob controle, to talk about the documentary The Thinking Game, which tells the story of DeepMind and Demis Hassabis. Here is the link to the sales page to learn more about me and the course: https://www.cursovidacomia.com.br/ Here is the link to sign up: https://pay.hotmart.com/W98240617U Documentary video: https://www.youtube.com/watch?v=d95J8yzvjbQ Link to the video of an AI training to play Pokémon: https://www.youtube.com/watch?v=DcYLT37ImBY Link to the episode about diffusion: https://open.spotify.com/episode/2gIzBcgIjSwoDX62KmepfK?si=e6c68fe098544723 WhatsApp group link: https://chat.whatsapp.com/GNLhf8aCurbHQc9ayX5oCP Podcast Instagram: https://www.instagram.com/podcast.vidacomia My LinkedIn: https://www.linkedin.com/in/filipe-lauar/ Fabrício's LinkedIn: https://www.linkedin.com/in/fabriciocarraro/ Link to the podcast IA sob controle: https://open.spotify.com/show/5xLCMHJ6eGWzdu8JaIDkuP?si=8ffcc0b287e64e6a
Discover how Google DeepMind dominates the AI race with “The Thinking Game”! In this episode of Applelianos Podcast we analyze the documentary that reveals the secrets of Demis Hassabis: from chess prodigy to Nobel laureate for AlphaFold. We explore AlphaGo beating Go, protein advances that could cure diseases, and the vision of AGI by 2030 with Gemini. Is Google unbeatable against OpenAI? Listen for the ethical risks, the breakthroughs, and why this supremacy changes the world. Don't miss it! #DeepMind #IA https://seoxan.es/crear_pedido_hosting Coupon code "APPLE" SPONSORED BY SEOXAN Professional SEO optimization for your business https://seoxan.es https://uptime.urtix.es //Links https://youtu.be/d95J8yzvjbQ?si=R04WmBmQeVIfGYIJ https://www.elmundo.es/tecnologia/2025/11/26/69271d8be9cf4a20538b458e.html# JOIN US LIVE: Leave your opinion in the comments, ask questions, and be part of the most important conversation about the future of the iPad and the Apple ecosystem. Your voice counts! DID YOU LIKE THE EPISODE? ✨ Give it a LIKE, SUBSCRIBE and turn on notifications so you don't miss anything, COMMENT, and SHARE with your Appleliano friends. FOLLOW US ON ALL OUR PLATFORMS: YouTube: https://www.youtube.com/@Applelianos Telegram: https://t.me/+Jm8IE4n3xtI2Zjdk X (Twitter): https://x.com/ApplelianosPod Facebook: https://www.facebook.com/applelianos Apple Podcasts: https://apple.co/39QoPbO
This week in AI, the bubble keeps inflating despite fresh warnings, Google stages an AI comeback, and Chinese AI threatens Nvidia. Though fears around irrational AI spending used to be confined to skeptics, now even industry insiders like Google's Sundar Pichai and Demis Hassabis are voicing doubts. CNBC's Deirdre Bosa speaks to Josh Woodward, Alphabet's VP of Google Labs, Dan Niles, founder of Niles Investment Management, and founder of GPU management company Hydra Host Aaron Ginn for more. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
How can you write science-based fiction without info-dumping your research? How can you use AI tools in a creative way, while still focusing on a human-first approach? Why is adapting to the fast pace of change so difficult and how can we make the most of this time? Jamie Metzl talks about Superconvergence and more. In the intro, How to avoid author scams [Written Word Media]; Spotify vs Audible audiobook strategy [The New Publishing Standard]; Thoughts on Author Nation and why constraints are important in your author life [Self-Publishing with ALLi]; Alchemical History And Beautiful Architecture: Prague with Lisa M Lilly on my Books and Travel Podcast. Today's show is sponsored by Draft2Digital, self-publishing with support, where you can get free formatting, free distribution to multiple stores, and a host of other benefits. Just go to www.draft2digital.com to get started. This show is also supported by my Patrons. Join my Community at Patreon.com/thecreativepenn Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. You can listen above or on your favorite podcast app or read the notes and links below. Here are the highlights and the full transcript is below. Show Notes How personal history shaped Jamie's fiction writing Writing science-based fiction without info-dumping The super convergence of three revolutions (genetics, biotech, AI) and why we need to understand them holistically Using fiction to explore the human side of genetic engineering, life extension, and robotics Collaborating with GPT-5 as a named co-author How to be a first-rate human rather than a second-rate machine You can find Jamie at JamieMetzl.com. Transcript of interview with Jamie Metzl Jo: Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. So welcome, Jamie. Jamie: Thank you so much, Jo. Very happy to be here with you. Jo: There is so much we could talk about, but let's start with you telling us a bit more about you and how you got into writing. From History PhD to First Novel Jamie: Well, I think like a lot of writers, I didn't know I was a writer. I was just a kid who loved writing. Actually, just last week I was going through a bunch of boxes from my parents' house and I found my autobiography, which I wrote when I was nine years old. So I've been writing my whole life and loving it. It was always something that was very important to me. When I finished my DPhil, my PhD at Oxford, and my dissertation came out, it just got scooped up by Macmillan in like two minutes. And I thought, “God, that was easy.” That got me started thinking about writing books. I wanted to write a novel based on the same historical period – my PhD was in Southeast Asian history – and I wanted to write a historical novel set in the same period as my dissertation, because I felt like the dissertation had missed the human element of the story I was telling, which was related to the Cambodian genocide and its aftermath. So I wrote what became my first novel, and I thought, “Wow, now I'm a writer.” I thought, “All right, I've already published one book. 
I'm gonna get this other book out into the world.” And then I ran into the brick wall of: it's really hard to be a writer. It's almost easier to write something than to get it published. I had to learn a ton, and it took nine years from when I started writing that first novel, The Depths of the Sea, to when it finally came out. But it was such a positive experience, especially to have something so personal to me as that story. I'd lived in Cambodia for two years, I'd worked on the Thai-Cambodian border, and I'm the child of a Holocaust survivor. So there was a whole lot that was very emotional for me. That set a pattern for the rest of my life as a writer, at least where, in my nonfiction books, I'm thinking about whatever the issues are that are most important to me. Whether it was that historical book, which was my first book, or Hacking Darwin on the future of human genetic engineering, which was my last book, or Superconvergence, which, as you mentioned in the intro, is my current book. But in every one of those stories, the human element is so deep and so profound. You can get at some of that in nonfiction, but I've also loved exploring those issues in deeper ways in my fiction. So in my more recent novels, Genesis Code and Eternal Sonata, I've looked at the human side of the story of genetic engineering and human life extension. And now my agent has just submitted my new novel, Virtuoso, about the intersection of AI, robotics, and classical music. With all of this, who knows what's the real difference between fiction and nonfiction? We're all humans trying to figure things out on many different levels. Shifting from History to Future Tech Jo: I knew that you were a polymath, someone who's interested in so many things, but the music angle with robotics and AI is fascinating. I do just want to ask you, because I was also at Oxford – what college were you at? Jamie: I was in St. Antony's. Jo: I was at Mansfield, so we were in that slightly smaller, less famous college group, if people don't know. Jamie: You know, but we're small but proud. Jo: Exactly. That's fantastic. You mentioned that you were on the historical side of things at the beginning and now you've moved into technology and also science, because this book Superconvergence has a lot of science. So how did you go from history and the past into science and the future? Biology and Seeing the Future Coming Jamie: It's a great question. I'll start at the end and then back up. A few years ago I was speaking at Lawrence Livermore National Laboratory, which is one of the big scientific labs here in the United States. I was a guest of the director and I was speaking to their 300 top scientists. I said to them, “I'm here to speak with you about the future of biology at the invitation of your director, and I'm really excited. But if you hear something wrong, please raise your hand and let me know, because I'm entirely self-taught. The last biology course I took was in 11th grade of high school in Kansas City.” Of course I wouldn't say that if I didn't have a lot of confidence in my process. But in many ways I'm self-taught in the sciences. As you know, Jo, and as all of your listeners know, the foundation of everything is curiosity and then a disciplined process for learning. 
Even our greatest super-specialists in the world now – whatever their background – the world is changing so fast that if anyone says, “Oh, I have a PhD in physics/chemistry/biology from 30 years ago,” the exact topic they learned 30 years ago is less significant than their process for continuous learning. More specifically, in the 1990s I was working on the National Security Council for President Clinton, which is the president's foreign policy staff. My then boss and now close friend, Richard Clarke – who became famous as the guy who had tragically predicted 9/11 – used to say that the key to efficacy in Washington and in life is to try to solve problems that other people can't see. For me, almost 30 years ago, I felt to my bones that this intersection of what we now call AI and the nascent genetics revolution and the nascent biotechnology revolution was going to have profound implications for humanity. So I just started obsessively educating myself. When I was ready, I started writing obscure national security articles. Those got a decent amount of attention, so I was invited to testify before the United States Congress. I was speaking out a lot, saying, “Hey, this is a really important story. A lot of people are missing it. Here are the things we should be thinking about for the future.” I wasn't getting the kind of traction that I wanted. I mentioned before that my first book had been this dry Oxford PhD dissertation, and that had led to my first novel. So I thought, why don't I try the same approach again – writing novels to tell this story about the genetics, biotech, and what later became known popularly as the AI revolution? That led to my two near-term sci-fi novels, Genesis Code and Eternal Sonata. On my book tours for those novels, when I explained the underlying science to people in my way, as someone who taught myself, I could see in their eyes that they were recognizing not just that something big was happening, but that they could understand it and feel like they were part of that story. That's what led me to write Hacking Darwin, as I mentioned. That book really unlocked a lot of things. I had essentially predicted the CRISPR babies that were born in China before it happened – down to the specific gene I thought would be targeted, which in fact was the case. After that book was published, Dr. Tedros, the Director-General of the World Health Organization, invited me to join the WHO Expert Advisory Committee on Human Genome Editing, which I did. It was a really great experience and got me thinking a lot about the upside of this revolution and the downside. The Birth of Superconvergence Jamie: I get a lot of wonderful invitations to speak, and I have two basic rules for speaking: Never use notes. Never ever. Never stand behind a podium. Never ever. Because of that, when I speak, my talks tend to migrate. I'd be speaking with people about the genetics revolution as it applied to humans, and I'd say, “Well, this is just a little piece of a much bigger story.” The bigger story is that after nearly four billion years of life on Earth, our one species has the increasing ability to engineer novel intelligence and re-engineer life. The big question for us, and frankly for the world, is whether we're going to be able to use that almost godlike superpower wisely. As that idea got bigger and bigger, it became this inevitable force. You write so many books, Jo, that I think it's second nature for you. Every time I finish a book, I think, “Wow, that was really hard. 
I'm never doing that again.” And then the books creep up on you. They call to you. At some point you say, “All right, now I'm going to do it.” So that was my current book, Superconvergence. Like everything, every journey you take a step, and that step inspires another step and another. That's why writing and living creatively is such a wonderfully exciting thing – there's always more to learn and always great opportunities to push ourselves in new ways. Balancing Deep Research with Good Storytelling Jo: Yeah, absolutely. I love that you've followed your curiosity and then done this disciplined process for learning. I completely understand that. But one of the big issues with people like us who love the research – and having read your Superconvergence, I know how deeply you go into this and how deeply you care that it's correct – is that with fiction, one of the big problems with too much research is the danger of brain-dumping. Readers go to fiction for escapism. They want the interesting side of it, but they want a story first. What are your tips for authors who might feel like, “Where's the line between putting in my research so that it's interesting for readers, but not going too far and turning it into a textbook?” How do you find that balance? Jamie: It's such a great question. I live in New York now, but I used to live in Washington when I was working for the U.S. government, and there were a number of people I served with who later wrote novels. Some of those novels felt like policy memos with a few sex scenes – and that's not what to do. To write something that's informed by science or really by anything, everything needs to be subservient to the story and the characters. The question is: what is the essential piece of information that can convey something that's both important to your story and your character development, and is also an accurate representation of the world as you want it to be? I certainly write novels that are set in the future – although some of them were a future that's now already happened because I wrote them a long time ago. You can make stuff up, but as an author you have to decide what your connection to existing science and existing technology and the existing world is going to be. I come at it from two angles. One: I read a huge number of scientific papers and think, “What does this mean for now, and if you extrapolate into the future, where might that go?” Two: I think about how to condense things. We've all read books where you're humming along because people read fiction for story and emotional connection, and then you hit a bit like: “I sat down in front of the president, and the president said, ‘Tell me what I need to know about the nuclear threat.'” And then it's like: insert memo. That's a deal-killer. It's like all things – how do you have a meaningful relationship with another person? It's not by just telling them your story. Even when you're telling them something about you, you need to be imagining yourself sitting in their shoes, hearing you. These are very different disciplines, fiction and nonfiction. But for the speculative nonfiction I write – “here's where things are now, and here's where the world is heading” – there's a lot of imagination that goes into that too. It feels in many ways like we're living in a sci-fi world because the rate of technological change has been accelerating continuously, certainly for the last 12,000 years since the dawn of agriculture. It's a balance. 
For me, I feel like I'm a better fiction writer because I write nonfiction, and I'm a better nonfiction writer because I write fiction. When I'm writing nonfiction, I don't want it to be boring either – I want people to feel like there's a story and characters and that they can feel themselves inside that story. Jo: Yeah, definitely. I think having some distance helps as well. If you're really deep into your topics, as you are, you have to leave that manuscript a little bit so you can go back with the eyes of the reader as opposed to your eyes as the expert. Then you can get their experience, which is great. Looking Beyond Author-Focused AI Fears Jo: I want to come to your technical knowledge, because AI is a big thing in the author and creative community, like everywhere else. One of the issues is that creators are focusing on just this tiny part of the impact of AI, and there's a much bigger picture. For example, in 2024, Demis Hassabis from Google DeepMind and his collaborative partner John Jumper won the Nobel Prize for Chemistry with AlphaFold. It feels to me like there's this massive world of what's happening with AI in health, climate, and other areas, and yet we are so focused on a lot of the negative stuff. Maybe you could give us a couple of things about what there is to be excited and optimistic about in terms of AI-powered science? Jamie: Sure. I'm so excited about all of the new opportunities that AI creates. But I also think there's a reason why evolution has preserved this very human feeling of anxiety: because there are real dangers. Anybody who's Pollyanna-ish and says, “Oh, the AI story is inevitably positive,” I'd be distrustful. And anyone who says, “We're absolutely doomed, this is the end of humanity,” I'd also be distrustful. So let me tell you the positives and the negatives, and maybe some thoughts about how we navigate toward the former and away from the latter. AI as the New Electricity Jamie: When people think of AI right now, they're thinking very narrowly about these AI tools and ChatGPT. But we don't think of electricity that way. Nobody says, “I know electricity – electricity is what happens at the power station.” We've internalised the idea that electricity is woven into not just our communication systems or our houses, but into our clothes, our glasses – it's woven into everything and has super-empowered almost everything in our modern lives. That's what AI is. In Superconvergence, the majority of the book is about positive opportunities: In healthcare, moving from generalised healthcare based on population averages to personalised or precision healthcare based on a molecular understanding of each person's individual biology. As we build these massive datasets like the UK Biobank, we can take a next jump toward predictive and preventive healthcare, where we're able to address health issues far earlier in the process, when interventions can be far more benign. I'm really excited about that, not to mention the incredible new kinds of treatments – gene therapies, or pharmaceuticals based on genetics and systems-biology analyses of patients. Then there's agriculture. Over the last hundred years, because of the technologies of the Green Revolution and synthetic fertilisers, we've had an incredible increase in agricultural productivity. That's what's allowed us to quadruple the global population. 
But if we just continue agriculture as it is, as we get towards ten billion wealthier, more empowered people wanting to eat like we eat, we're going to have to wipe out all the wild spaces on Earth to feed them. These technologies help provide different paths toward increasing agricultural productivity with fewer inputs of land, water, fertiliser, insecticides, and pesticides. That's really positive. I could go on and on about these positives – and I do – but there are very real negatives. I was a member of the WHO Expert Advisory Committee on Human Genome Editing after the first CRISPR babies were very unethically created in China. I'm extremely aware that these same capabilities have potentially incredible upsides and very real downsides. That's the same as every technology in the past, but this is happening so quickly that it's triggering a lot of anxieties. Governance, Responsibility, and Why Everyone Has a Role Jamie: The question now is: how do we optimise the benefits and minimise the harms? The short, unsexy word for that is governance. Governance is not just what governments do; it's what all of us do. That's why I try to write books, both fiction and nonfiction, to bring people into this story. If people “other” this story – if they say, “There's a technology revolution, it has nothing to do with me, I'm going to keep my head down” – I think that's dangerous. The way we're going to handle this as responsibly as possible is if everybody says, “I have some role. Maybe it's small, maybe it's big. The first step is I need to educate myself. Then I need to have conversations with people around me. I need to express my desires, wishes, and thoughts – with political leaders, organisations I'm part of, businesses.” That has to happen at every level. You're in the UK – you know the anti-slavery movement started with a handful of people in Cambridge and grew into a global movement. I really believe in the power of ideas, but ideas don't spread on their own. These are very human networks, and that's why writing, speaking, communicating – probably for every single person listening to this podcast – is so important. Jo: Mm, yeah. Fiction Like AI 2041 and Thinking Through the Issues Jo: Have you read AI 2041 by Kai-Fu Lee and Chen Qiufan? Jamie: No. I heard a bunch of their interviews when the book came out, but I haven't read it. Jo: I think that's another good one because it's fiction – a whole load of short stories. It came out a few years ago now, but the issues they cover in the stories, about different people in different countries – I remember one about deepfakes – make you think more about the topics and help you figure out where you stand. I think that's the issue right now: it's so complex, there are so many things. I'm generally positive about AI, but of course I don't want autonomous drone weapons, you know? The Messy Reality of “Bad” Technologies Jamie: Can I ask you about that? Because this is why it's so complicated. Like you, I think nobody wants autonomous killer drones anywhere in the world. But if you right now were the defence minister of Ukraine, and your children are being kidnapped, your country is being destroyed, you're fighting for your survival, you're getting attacked every night – and you're getting attacked by the Russians, who are investing more and more in autonomous killer robots – you kind of have two choices. 
You can say, “I'm going to surrender,” or, “I'm going to use what technology I have available to defend myself, and hopefully fight to either victory or some kind of stand-off.” That's what our societies did with nuclear weapons. Maybe not every American recognises that Churchill gave Britain's nuclear secrets to America as a way of greasing the wheels of the Anglo-American alliance during the Second World War – but that was our programme: we couldn't afford to lose that war, and we couldn't afford to let the Nazis get nuclear weapons before we did. So there's the abstract feeling of, “I'm against all war in the abstract. I'm against autonomous killer robots in the abstract.” But if I were the defence minister of Ukraine, I would say, “What will it take for us to build the weapons we can use to defend ourselves?” That's why all this stuff gets so complicated. And frankly, it's why the relationship between fiction and nonfiction is so important. If every novel had a situation where every character said, “Oh, I know exactly the right answer,” and then they just did the right answer and it was obviously right, it wouldn't make for great fiction. We're dealing with really complex humans. We have conflicting impulses. We're not perfect. Maybe there are no perfect answers – but how do we strive toward better rather than worse? That's the question. Jo: Absolutely. I don't want to get too political on things. How AI Is Changing the Writing Life Jo: Let's come back to authors. In terms of the creative process, the writing process, the research process, and the business of being an author – what are some of the ways that you already use AI tools, and some of the ways, given your futurist brain, that you think things are going to change for us? Jamie: Great question. I'll start with a little middle piece. I found you, Jo, through GPT-5. I asked ChatGPT, “I'm coming out with this book and I want to connect with podcasters who are a little different from the ones I've done in the past. I've been a guest on Joe Rogan twice and some of the bigger podcasts. Make me a list of really interesting people I can have great conversations with.” That's how I found you. So this is one reward of that process. Let me say that in the last year I've worked on three books, and I'll explain how my relationship with AI has changed over those books. Cleaning Up Citations (and Getting Burned) Jamie: First is the highly revised paperback edition of Superconvergence. When the hardback came out, I had – I don't normally work with research assistants because I like to dig into everything myself – but the one thing I do use a research assistant for is that I can't be bothered, when I'm writing something, to do the full Chicago-style footnote if I'm already referencing an academic paper. So I'd just put the URL as the footnote and then hire a research assistant and say, “Go to this URL and change it into a Chicago-style citation. That's it.” Unfortunately, my research assistant on the hardback used early-days ChatGPT for that work. He did the whole thing, came back, everything looked perfect. I said, “Wow, amazing job.” It was only later, as I was going through them, that I realised something like 50% of them were invented footnotes. It was very painful to go back and fix, and it took ten times more time. With the paperback edition, I didn't use AI that much, but I did say things like, “Here's all the information – generate a Chicago-style citation.” That was better. 
I noticed there were a few things where I stopped using the thesaurus function on Microsoft Word because I'd just put the whole paragraph into the AI and say, “Give me ten other options for this one word,” and it would be like a contextual thesaurus. That was pretty good. Talking to a Robot Pianist Character Jamie: Then, for my new novel Virtuoso, I was writing a character who is a futurist robot that plays the piano very beautifully – not just humanly, but almost finding new things in the music we've written and composing music that resonates with us. I described the actions of that robot in the novel, but I didn't describe the inner workings of the robot's mind. In thinking about that character, I realised I was the first science-fiction writer in history who could interrogate a machine about what it was “thinking” in a particular context. I had the most beautiful conversations with ChatGPT, where I would give scenarios and ask, “What are you thinking? What are you feeling in this context?” It was all background for that character, but it was truly profound. Co-Authoring The AI Ten Commandments with GPT-5 Jamie: Third, I have another book coming out in May in the United States. I gave a talk this summer at the Chautauqua Institution in upstate New York about AI and spirituality. I talked about the history of our human relationship with our technology, about how all our religious and spiritual traditions have deep technological underpinnings – certainly our Abrahamic religions are deeply connected to farming, and Protestantism to the printing press. Then I had a section about the role of AI in generating moral codes that would resonate with humans. Everybody went nuts for this talk, and I thought, “I think I'm going to write a book.” I decided to write it differently, with GPT-5 as my named co-author. The first thing I did was outline the entire book based on the talk, which I'd already spent a huge amount of time thinking about and organising. Then I did a full outline of the arguments and structures. Then I trained GPT-5 on my writing style. The way I did it – which I fully describe in the introduction to the book – was that I'd handle all the framing: the full introduction, the argument, the structure. But if there was a section where, for a few paragraphs, I was summarising a huge field of data, even something I knew well, I'd give GPT-5 the intro sentence and say, “In my writing style, prepare four paragraphs on this.” For example, I might write: “AI has the potential to see us humans like we humans see ant colonies.” Then I'd say, “Give me four paragraphs on the relationship between the individual and the collective in ant colonies.” I could have written those four paragraphs myself, but it would've taken a month to read the life's work of E.O. Wilson and then write them. GPT-5 wrote them in seconds or minutes, in its thinking mode. I'd then say, “It's not quite right – change this, change that,” and we'd go back and forth three or four times. Then I'd edit the whole thing and put it into the text. So this book that I could have written on my own in a year, I wrote a first draft of with GPT-5 as my named co-author in two days. The whole project will take about six months from start to finish, and I'm having massive human editing – multiple edits from me, plus a professional editor. It's not a magic AI button. But I feel strongly about listing GPT-5 as a co-author because I've written it differently than previous books. 
I'm a huge believer in the old-fashioned lone author struggling and suffering – that's in my novels, and in Virtuoso I explore that. But other forms are going to emerge, just like video games are a creative, artistic form deeply connected to technology. The novel hasn't been around forever – the current format is only a few centuries old – and forms are always changing. There are real opportunities for authors, and there will be so much crap flooding the market because everybody can write something and put it up on Amazon. But I think there will be a very special place for thoughtful human authors who have an idea of what humans do at our best, and who translate that into content other humans can enjoy. Traditional vs Indie: Why This Book Will Be Self-Published Jo: I'm interested – you mentioned that it's your named co-author. Is this book going through a traditional publisher, and what do they think about that? Or are you going to publish it yourself? Jamie: It's such a smart question. What I found quickly is that when you get to be an author later in your career, you have all the infrastructure – a track record, a fantastic agent, all of that. But there were two things that were really important to me here: I wanted to get this book out really fast – six months instead of a year and a half. It was essential to me to have GPT-5 listed as my co-author, because if it were just my name, I feel like it would be dishonest. Readers who are used to reading my books – I didn't want to present something different than what it was. I spoke with my agent, who I absolutely love, and she said that for this particular project it was going to be really hard in traditional publishing. So I did a huge amount of research, because I'd never done anything in the self-publishing world before. I looked at different models. There was one hybrid model that's basically the same as traditional, but you pay for the things the publisher would normally pay for. I ended up not doing that. Instead, I decided on a self-publishing route where I disaggregated the publishing process. I found three teams: one for producing the book, one for getting the book out into the world, and a smaller one for the audiobook. I still believe in traditional publishing – there's a lot of wonderful human value-add. But some works just don't lend themselves to traditional publishing. For this book, which is called The AI Ten Commandments, that's the path I've chosen. Jo: And when's that out? I think people will be interested. Jamie: April 26th. Those of us used to traditional publishing think, “I've finished the book, sold the proposal, it'll be out any day now,” and then it can be a year and a half. It's frustrating. With this, the process can be much faster because it's possible to control more of the variables. But the key – as I was saying – is to make sure it's as good a book as everything else you've written. It's great to speed up, but you don't want to compromise on quality. The Coming Flood of Excellent AI-Generated Work Jo: Yeah, absolutely. We're almost out of time, but I want to come back to your “flood of crap” and the “AI slop” idea that's going around. Because you are working with GPT-5 – and I do as well, and I work with Claude and Gemini – and right now there are still issues. Like you said about referencing, there are still hallucinations, though fewer. But fast-forward two, five years: it's not a flood of crap. It's a flood of excellent. It's a flood of stuff that's better than us. Jamie: We're humans. 
It's better than us in certain ways. If you have farm machinery, it's better than us at certain aspects of farming. I'm a true humanist. I think there will be lots of things machines do better than us, but there will be tons of things we do better than them. There's a reason humans still care about chess, even though machines can beat humans at chess. Some people are saying things I fully disagree with, like this concept of AGI – artificial general intelligence – where machines do everything better than humans. I've summarised my position in seven letters: “AGI is BS.” The only way you can believe in AGI in that sense is if your concept of what a human is and what a human mind is is so narrow that you think it's just a narrow range of analytical skills. We are so much more than that. Humans represent almost four billion years of embodied evolution. There's so much about ourselves that we don't know. As incredible as these machines are and will become, there will always be wonderful things humans can do that are different from machines.

What I always tell people is: whatever you're doing, don't be a second-rate machine. Be a first-rate human. If you're doing something and a machine is doing that thing much better than you, then shift to something where your unique capacities as a human give you the opportunity to do something better. So yes, I totally agree that the quality of AI-generated stuff will get better. But I think the most creative and successful humans will be the ones who say, “I recognise that this is creating new opportunities, and I'm going to insert my core humanity to do something magical and new.” People are “othering” these technologies, but the technologies themselves are magnificent human-generated artefacts. They're not alien UFOs that landed here. It's a scary moment for creatives, no doubt, because there are things all of us did in the past that machines can now do really well. But this is the moment where the most creative people ask themselves, “What does it mean for me to be a great human?” The pat answers won't apply. In my Virtuoso novel I explore that a lot. The idea that “machines don't do creativity” – they will do incredible creativity; it just won't be exactly human creativity. We will be potentially huge beneficiaries of these capabilities, but we really have to believe in and invest in the magic of our core humanity.

Where to Find Jamie and His Books

Jo: Brilliant. So where can people find you and your books online?

Jamie: Thank you so much for asking. My website is jamiemetzl.com – and my books are available everywhere.

Jo: Fantastic. Thanks so much for your time, Jamie. That was great.

Jamie: Thank you, Joanna.

The post Writing The Future, And Being More Human In An Age of AI With Jamie Metzl first appeared on The Creative Penn.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome to AI Unraveled (November 20, 2025): Your daily strategic briefing on the business impact of AI. Today's Highlights: Saudi Arabia signs landmark AI deals with xAI and Nvidia; Europe scales back crucial AI and privacy laws; Anthropic courts Microsoft and Nvidia to break free from AWS; and Google's Gemini 3 climbs leaderboards, reinforcing its path toward AGI. Strategic Pillars & Topics:
Google's much anticipated new large language model Gemini 3 begins rolling out today. We'll tell you what we learned from an early product briefing and bring you our conversation with Google executives Demis Hassabis and Josh Woodward, just ahead of the launch. Guests: Demis Hassabis, chief executive and co-founder of Google DeepMind; Josh Woodward, vice president of Google Labs and Google Gemini. Additional Reading: The Man Who ‘A.G.I.-Pilled' Google. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
A top Google scientist and 2024 Nobel laureate said that the most important skill for the next generation will be "learning how to learn" to keep pace with change as artificial intelligence transforms education and the workplace. Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google's DeepMind, said rapid technological change demands a new approach to learning and skill development. "It's very hard to predict the future, like 10 years from now, in normal cases. It's even harder today, given how fast AI is changing, even week by week," Hassabis told the audience. "The only thing you can say for certain is that huge change is coming." The neuroscientist and former chess prodigy said artificial general intelligence—a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can—could arrive within a decade. This, he said, will bring dramatic advances and a possible future of "radical abundance" despite acknowledged risks. Hassabis emphasized the need for "meta-skills," such as understanding how to learn and optimizing one's approach to new subjects, alongside traditional disciplines like math, science and humanities. "One thing we'll know for sure is you're going to have to continually learn ... throughout your career," he said. The DeepMind co-founder, who established the London-based research lab in 2010 before Google acquired it four years later, shared the 2024 Nobel Prize in Chemistry for developing AI systems that accurately predict protein folding—a breakthrough for medicine and drug discovery. Greek Prime Minister Kyriakos Mitsotakis joined Hassabis at the Athens event after discussing ways to expand AI use in government services. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality. "Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical," he said. "And if they see ... obscene wealth being created within very few companies, this is a recipe for significant social unrest." This article was provided by The Associated Press.
Google faces the greatest innovator's dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?
Today we tell the complete story of Google's 20+ year AI journey: from their first tiny language model in 2001 through the creation of Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk's fury over Google's DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google. This is the story of how the world's greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?
Sponsors:
Many thanks to our fantastic Fall ‘25 Season partners:
J.P. Morgan Payments
Sentry
WorkOS
Shopify
Acquired's 10th Anniversary Celebration!
When: October 20th, 4:00 PM PT
Who: All of you!
Where: https://us02web.zoom.us/j/84061500817?pwd=opmlJrbtOAen4YOTGmPlNbrOMLI8oo.1
Links:
Sign up for email updates and vote on future episodes!
Geoff Hinton's 2007 Tech Talk at Google
Our recent ACQ2 episode with Tobi Lutke
Worldly Partners' Multi-Decade Alphabet Study
In the Plex
Supremacy
Genius Makers
All episode sources
Carve Outs:
We're hosting the Super Bowl Innovation Summit!
F1: The Movie
Travelpro suitcases
Glue Guys Podcast
Sea of Stars
Stepchange Podcast
More Acquired:
Get email updates and vote on future episodes!
Join the Slack
Subscribe to ACQ2
Check out the latest swag in the ACQ Merch Store!
Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
What will it actually take to get to AGI? Today we unpack the “jagged frontier” of AI capabilities — systems that can dazzle at PhD-level reasoning one moment but stumble on high school math the next. We look at Demis Hassabis' timeline and critique of current models, the debate over whether today's AI really operates at PhD level, and why continual learning and memory remain the missing breakthroughs. We also explore how coding agents, real-world usage data, and persistent context may become critical steps on the road to AGI. Finally, in headlines: lawsuits over AI search, Apple leadership changes, OpenAI's renegotiated deal with Microsoft, and layoffs at xAI.
Welcome back to another episode of Upside at the EUVC Podcast, where Dan Bowyer, Mads Jensen of SuperSeed, Andrew J Scott of 7percent Ventures, and Lomax unpack the forces shaping European venture capital. This week, veteran journalist Mike Butcher (ex-TechCrunch Europe, The Europas, TechFugees) joins the pod. From the creator economy eating media brands, to Europe's fragmented ecosystem and the capital gap that just won't die, we dive into EU-Inc, Draghi's unfulfilled reforms, ASML's surprise bet on Mistral, Europe's defense awakening, Klarna's IPO, and quantum's hot streak.
Here's what's covered:
00:01 – Mike's Reset
TechCrunch Europe closes; Mike reflects on redundancy, summer off, dabbling in social and video.
03:00 – Media Evolution & Creator Economy
From '90s trade mags → TechCrunch → The Europas & TechFugees. Blogs as early social media; today's creators (MrBeast, Bari Weiss, Cleo Abram) echo that era. Bloomberg pushes reporters front and center as media becomes personality-driven.
06:45 – Europe's Ecosystem & Debate Culture
Europe isn't Silicon Valley's 101 highway — it's dozens of fragmented hubs. Conferences like Slush, Web Summit, VivaTech anchor the scene, but the missing ingredient is debate. US VCs spar on stage then grab a beer; Europe is still too polite.
12:00 – All-In Summit Debrief
Mads' takeaways from LA: Musk on robotics (the “hand” bottleneck), Demis Hassabis on AGI (5–10 yrs away), Eric Schmidt on US–China AI race, Alex Karp on Europe's regulatory failures. The Valley vibe captured, but it's only one voice.
17:00 – EU-Inc & Draghi Report
Draghi's 383 recommendations, just 11% implemented. €16T in pensions sit mostly in bonds; only 0.02–0.03% flows into VC (vs 1–2% in the US). Permitting bottlenecks: 44 months for energy approvals. Panel calls for a Brussels “crack unit,” employee stock option reform, and fixing skilled migration.
35:00 – Deal of the Week: ASML × Mistral
ASML leads a €2B round in Mistral at €11B valuation. Strategic and cultural fit (Netherlands ↔ Paris) mattered more than sovereignty. Mads: 14× revenue is a bargain vs US peers. Andrew: proof Europe's VCs are too small — corporates must fill the gap. Lomax: ASML knows it's a one-trick pony with 90% lithography share; diversifying into AI hedges risk.
49:00 – Defense & Industrial Base
Russian drones hit Poland, NATO urgency spikes. UK pledges defense spend to 2.5% GDP by 2027, but procurement bottlenecks persist. Poland cuts red tape under fire; UK moves at peacetime pace. Andrew: real deterrence is industrial capacity. Mike: primes must be forced to buy from startups; dual-use innovators like Helsing show the way.
59:00 – Klarna IPO & the Klarna Mafia
Klarna IPOs at $15B (down from $46B peak). Oversubscribed; Sequoia nets ~$3.5B; Atomico 12M → 150M. A new “Klarna Mafia” of angels and operators will recycle liquidity back into Europe's ecosystem.
01:03:00 – Quantum's Hot Streak
PsiQuantum ($7B, Bristol roots), Quantinuum ($10B, Cambridge), IQM (Finland unicorn), Oxford Ionics' $1B exit. Europe has parity in talent but lacks growth capital. Lomax: “Quantum is hot, but a winter will come.” Andrew: Europe can win here — if the money shows up.
01:05:00 – Wrap-up
The pod ends on optimism: Europe may not own AGI, but in quantum it has a fair fight.
(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win (2:39) What is Google DeepMind? How does it interact with Google and Alphabet? (4:01) Genie 3 world model (9:21) State of robotics models, form factors, and more (14:42) AI science breakthroughs, measuring AGI (20:49) Nano-Banana and the future of creative tools, democratization of creativity (24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science Thanks to our partners for making this happen! Solana: https://solana.com/ OKX: https://www.okx.com/ Google Cloud: https://cloud.google.com/ IREN: https://iren.com/ Oracle: https://www.oracle.com/ Circle: https://www.circle.com/ BVNK: https://www.bvnk.com/ Follow Demis: https://x.com/demishassabis Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect
Get 40% off Ground News' unlimited access Vantage Plan at https://ground.news/airisk for only $5/month, explore how stories are framed worldwide and across the political spectrum. TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act
In Episode 69 of For Humanity: An AI Risk Podcast, we explore one of the most striking acts of activism in the AI debate: hunger strikes aimed at pushing Big Tech to prioritize safety over speed. Michael and Dennis, two AI safety advocates, join John from outside DeepMind's London headquarters, where they are staging hunger strikes to demand that frontier AI development be paused. Inspired by Guido's protest in San Francisco, they are risking their health to push tech leaders like Demis Hassabis to make public commitments to slow down the AI race. This episode looks at how ordinary people are taking extraordinary steps to demand accountability, why this form of protest is gaining attention, and what history tells us about the power of public pressure.
In this conversation, you'll discover:
* Why hunger strikers believe urgent action on AI safety is necessary
* How Big Tech companies are responding to growing public concern
* The role of parents, workers, and communities in shaping AI policy
* Parallels with past social movements that drove real change
* Practical ways you can make your voice heard in the AI safety conversation
This isn't just about technology—it's about responsibility, leadership, and the choices we make for future generations.
The aftershocks of GPT-5's chaotic rollout continue as OpenAI scrambles to address user backlash, confusing model choices, and shifting product strategies. In this episode, Paul Roetzer and Mike Kaput also explore the fallout from a leaked Meta AI policy document that raises major ethical concerns, share insights from Demis Hassabis on the path to AGI, and cover the latest AI power plays: Sam Altman's trillion-dollar ambitions, his public feud with Elon Musk, an xAI leadership shake-up, chip geopolitics, Apple's surprising AI comeback, and more. Show Notes: Access the show notes and show links here Timestamps: 00:00:00 — Intro 00:06:00 — GPT-5's Continued Chaotic Rollout 00:16:03 — Meta's Controversial AI Policies 00:28:27 — Demis Hassabis on AI's Future 00:40:55 — What's Next for OpenAI After GPT-5? 00:46:41 — Altman / Musk Drama 00:50:55 — xAI Leadership Shake-Up 00:55:55 — Perplexity's Audacious Play for Google Chrome 00:58:32 — Chip Geopolitics 01:01:43 — Anthropic and AI in Government 01:05:17 — Apple's AI Turnaround 01:08:09 — Cohere Raises $500M for Enterprise AI 01:10:57 — AI in Education This episode is brought to you by our Academy 3.0 Launch Event. Join Paul Roetzer and the SmarterX team on August 19 at 12pm ET for the launch of AI Academy 3.0 by SmarterX —your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Register here. This week's episode is brought to you by MAICON, our 6th annual Marketing AI Conference, happening in Cleveland, Oct. 14-16. The code POD100 saves $100 on all pass types. For more information on MAICON and to register for this year's conference, visit www.MAICON.ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy
Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. Fertility rates in the United States are currently near historic lows, largely because fewer women are having children in their 20s. As women delay starting families, many are opting for egg freezing, the process of retrieving and freezing unfertilized eggs, to preserve their fertility for the future. Does egg freezing provide women with a way to pause their biological clock? Correspondent Lesley Stahl interviews women who have decided to freeze their eggs and explores what the process entails physically, emotionally and financially. She also speaks with fertility specialists and an ethicist about success rates, equity issues and the increasing market potential of egg freezing. This is a double-length segment. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Is artificial intelligence about to cross a historic threshold? At the AI Action Summit, I attended an exceptional conversation between two major figures in the field: Demis Hassabis, co-founder and CEO of DeepMind, and James Manyika, Google's vice president in charge of research. Together, they shared their vision of the opportunities and risks that come with the rise of AI, in particular on the road to artificial general intelligence (AGI). Rebroadcast of the 14/02/2025 episode. In this captivating exchange organised by Google France, the two speakers review the current benefits of AI, notably in medical diagnosis in developing countries, and the promise of a universal digital assistant. They also discuss the prospect of intelligent systems capable of carrying out complex tasks, and the coming impact on the labour market. But this evolution also brings serious challenges: safety, governance, possible abuses, ethics... Demis Hassabis insists on the need to put guardrails in place and to make sure AI systems embody the right values. James Manyika, for his part, calls for anticipating the effects on society now and for investing in training.
Join hosts Alex Sarlin and Claire Zau, a Partner and AI Lead at GSV Ventures, as they explore the latest developments in education technology, from AI agents to teacher co-pilots, talent wars, and shifts in global AI strategies.
✨ Episode Highlights
[00:00:00] AI teacher co-pilots evolve into agentic workflows.
[00:02:15] OpenAI launches ChatGPT Agent for autonomous tasks.
[00:04:24] Meta, Google, and OpenAI escalate AI talent wars.
[00:07:38] Privacy guardrails emerge for AI agent actions.
[00:10:20] ChatGPT pilots “Study Together” learning mode.
[00:14:40] Teens use AI as companions, sparking debate.
[00:19:58] AI multiplies both positive and negative behaviors.
[00:29:11] Windsurf acquisition saga shows coding disruption.
[00:37:18] Teacher AI tools gain value through workflow data.
[00:42:48] DeepMind's rise positions Demis Hassabis as key leader.
[00:45:32] Google offers free Gemini AI plan to Indian students.
[00:49:39] Meta builds massive AI data centers for digital labor.
Plus, special guests:
[00:52:42] Matthew Gasda, a writer and director, on how educators can rethink writing and grading in the AI era.
[01:13:30] Marc Graham, founder of Spark Education AI, on using AI to personalize reading and engage reluctant readers.
Demis Hassabis is the CEO of Google DeepMind and Nobel Prize winner for his groundbreaking work in protein structure prediction using AI. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep475-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/demis-hassabis-2-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Demis's X: https://x.com/demishassabis DeepMind's X: https://x.com/GoogleDeepMind DeepMind's Instagram: https://instagram.com/GoogleDeepMind DeepMind's Website: https://deepmind.google/ Gemini's Website: https://gemini.google.com/ Isomorphic Labs: https://isomorphiclabs.com/ The MANIAC (book): https://amzn.to/4lOXJ81 Life Ascending (book): https://amzn.to/3AhUP7z SPONSORS: To support this podcast, check out our sponsors & get discounts: Hampton: Community for high-growth founders and CEOs. Go to https://joinhampton.com/lex Fin: AI agent for customer service. Go to https://fin.ai/lex Shopify: Sell stuff online. Go to https://shopify.com/lex LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex AG1: All-in-one daily nutrition drink. Go to https://drinkag1.com/lex OUTLINE: (00:00) - Introduction (00:29) - Sponsors, Comments, and Reflections (08:40) - Learnable patterns in nature (12:22) - Computation and P vs NP (21:00) - Veo 3 and understanding reality (25:24) - Video games (37:26) - AlphaEvolve (43:27) - AI research (47:51) - Simulating a biological organism (52:34) - Origin of life (58:49) - Path to AGI (1:09:35) - Scaling laws (1:12:51) - Compute (1:15:38) - Future of energy (1:19:34) - Human nature (1:24:28) - Google and the race to AGI (1:42:27) - Competition and AI talent (1:49:01) - Future of programming (1:55:27) - John von Neumann (2:04:41) - p(doom) (2:09:24) - Humanity (2:12:30) - Consciousness and quantum computation (2:18:40) - David Foster Wallace (2:25:54) - Education and research PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips
With the knowledge gained from developing a board-game-playing AI, they cracked one of the big questions in biochemistry: how proteins fold. For that they were awarded the 2024 Nobel Prize. Listen to all episodes in Sveriges Radio Play. The program airs for the first time on 10/12-2024. With the help of artificial intelligence, Demis Hassabis and John Jumper have created machines so intelligent that they have solved a 50-year-old scientific mystery – how proteins fold. Hear about the discovery that was rewarded with half of the 2024 Nobel Prize in Chemistry, opens the door to faster progress in drug development, and could change the medicine of the future. Reporter: Annika Östman annika.ostman@sr.se Producer: Lars Broström lars.brostrom@sr.se
In 2016 the world held its breath when the AI model AlphaGo challenged the world champion in the game of Go and won. In 2024, Demis Hassabis, the brain behind the model, was awarded the Nobel Prize for an entirely different discovery. Listen to all episodes in Sveriges Radio Play. The program first aired on 5/12-2024. At just eight years old, Demis Hassabis buys his first computer with the prize money from a chess tournament. As an adult, he develops the first computer system to outwit a human world champion in a game more advanced than chess. Vetenskapsradion meets Demis Hassabis, one of the 2024 Nobel laureates in Chemistry, for a personal conversation – about the road from chess nerd to the Google elite and a Nobel Prize. Reporter: Annika Östman annika.ostman@sr.se Producer: Lars Broström lars.brostrom@sr.se
Episode 18 of The Basic Income Show! What happened at this year's Basic Income Guarantee (BIG) Conference? Let's talk about Zohran Mamdani and his Guaranteed Basic Income Bill.
Chapters:
00:00 Welcome to The Basic Income Show
00:25 The BIG Conference
08:17 Union of Basic Income Participants
22:29 Newark New Jersey GBI Program Results
27:14 Comingle Update
28:54 Neurodivergence and UBI
35:51 Zohran Mamdani has co-sponsored a GBI bill
40:51 Canada's New Basic Income Bill S-206
54:33 Georgia's In Her Hands GBI Program News
59:43 Ireland's Basic Income for Artists Program Extended
1:02:46 Vinod Khosla on AI and UBI
1:07:24 New NSF Study About AI and UBI
1:15:08 Demis Hassabis on AI and UBI
1:19:16 Phonely's New Call Center AI
1:26:36 ElevenLabs' New V3 Audio AI
1:32:10 Trump's AI Czar David Sacks on AI and UBI
1:33:00 Economist Ann Pettifor on UBI
1:38:36 Basic Income for Climate Activists in Tuvalu
1:46:26 Concluding Remarks
Summary: In this conversation, Scott Santens and Conrad Shaw discuss the latest developments in the Basic Income movement, including the recent BIG conference in DC, community engagement, and the establishment of the Union of Basic Income Participants. They explore the importance of mutual aid, the impact of AI on employment, and legislative updates regarding Basic Income. The discussion also addresses critiques of Basic Income and highlights global perspectives on its implementation, emphasizing the need for economic empowerment and collective action.
AI Job Disruption Calculator: https://fundforhumanity.org/national-science-foundation-ai-worker-impact-report/
Vinod Khosla video: https://www.youtube.com/watch?v=8JZg0SuJozo
Kim Pate video: https://www.youtube.com/watch?v=DNFaXV1zeWc&t=443s
See my ongoing compilation of UBI evidence on Bluesky: https://bsky.app/profile/scottsantens.com/post/3lckzcleo7s24
See my ongoing compilation of UBI evidence on X: https://x.com/scottsantens/status/1766213155967955332
For more info about UBI, please refer to my UBI FAQ: http://scottsantens.com/basic-income-faq
Donate to the Income To Support All Foundation to support UBI projects: https://www.itsafoundation.org
Subscribe to the ITSA Newsletter for monthly UBI news: https://itsanewsletter.beehiiv.com/subscribe
Visit Basic Income Today for daily UBI news: https://basicincometoday.com
Sign up for the Comingle waitlist for voluntary UBI: https://www.comingle.us
Follow Scott: https://linktr.ee/scottsantens
Follow Conrad: https://bsky.app/profile/theubiguy.bsky.social https://www.linkedin.com/in/conradshaw/
Follow Josh: https://bsky.app/profile/misterjworth.bsky.social https://www.linkedin.com/in/joshworth/
Special thanks to: Gisele Huff, Haroon Mokhtarzada, Steven Grimm, Judith Bliss, Lowell Aronoff, Jessica Chew, Katie Moussouris, David Ruark, Tricia Garrett, A.W.R., Daryl Smith, Larry Cohen, John Steinberger, Philip Rosedale, Liya Brook, Frederick Weber, Laurel gillespie, Dylan Hirsch-Shell, Tom Cooper, Robert Collins, Joanna Zarach, Mgmguy, Daragh Ward, Albert Wenger, Andrew Yang, Peter T Knight, Michael Finney, David Ihnen, Steve Roth, Miki Phagan, Walter Schaerer, Elizabeth Corker, Albert, Daniel Brockman, Natalie Foster, Joe Ballou, Arjun, Justin Dart, Felix Ling, S, Jocelyn Hockings, Mark Donovan, Jason Clark, Chuck Cordes, Mark Broadgate, Leslie Kausch, Braden Ferrin, Juro Antal, Austin, Deanna McHugh, Stephen Castro-Starkey, and all my other patrons for their support. If you'd like to see your name here in future video descriptions, you can do so by becoming a patron on Patreon at the UBI Producer level or above. Patreon: 
https://www.patreon.com/scottsantens/membership#universalbasicincome #BasicIncome #UBI
Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials. TRANSCRIPT Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. Dr. Loaiza-Bonilla will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of the episode. Dr. Bonilla, it's great to be speaking with you today. Thanks for being here. Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today. Dr. Paul Hanona: Absolutely. Let's just jump right into it. Let's talk about the way that we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI? Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology is one of those that's focused on one principle to me, which is clinical purpose is first, instead of the algorithm or whatever technology we're going to be using. If we look at the best models in the world, they're really irrelevant unless we really solve a real day-to-day challenge, either when we're talking to patients in the clinic or in the infusion chair or making decision support. Currently, what I'm doing the most is focusing on solutions that are saving us time to be more productive and spend more time with our patients. So, for example, we're using ambient AI for appropriate documentation in real time with our patients. We're leveraging certain tools to assess for potential admission or readmission of patients who have certain conditions as well. And it's all about combining the listening of physicians like ourselves who are end users, those who create those algorithms, data scientists, and patient advocates, and even regulators, before they even write any single line of code. I felt that on my own, you know, entrepreneurial aspects, but I think it's an ethos that we should all follow. And I think that AI shouldn't be just bolted on later. We always have to look at workflows and try to look, for example, at clinical trial matching, which is something I'm very passionate about. We need to make sure that first, it's easier to access for patients, that oncologists like myself can go into the interface and be able to pull the data in real time when you really need it, and you don't get all this fatigue alerts. To me, that's the responsible way of doing so. Those are like the opportunities, right? So, the challenge is how we can make this happen in a meaningful way – we're just not reacting to like a black box suggestion or something that we have no idea why it came up to be. So, in terms of success – and I can tell you probably two stories of things that we know we're seeing successful – we all work closely with radiation oncologists, right? 
So, there are now these tools, for example, of automated contouring in radiation oncology, and some of these solutions were brought up in different meetings, including the last ASCO meeting. But overall, we know that transformer-based segmentation tools; transformer is just the specific architecture of the machine learning algorithm that has been able to dramatically reduce the time for colleagues to spend allotting targets for radiation oncology. So, comparing the target versus the normal tissue, which sometimes it takes many hours, now we can optimize things over 60%, sometimes even in minutes. So, this is not just responsible, but it's also an efficiency win, it's a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success that I think is relevant is, for example, on the clinical trial matching side. We've been working on that and, you know, I don't want to preach to the choir here, but having the ability for us to structure data in real time using these tools, being able to extract information on biomarkers, and then show that multi-agentic AI is superior to what we call zero-shot or just throwing it into ChatGPT or any other algorithm, but using the same tools but just fine-tuned to the point that we can be very efficient and actually reliable to the level of almost like a research coordinator, is not just theory. Now, it can change lives because we can get patients enrolled in clinical trials and be activated in different places wherever the patient may be. I know it's like a long answer on that, but, you know, as we talk about responsible AI, that's important. And in terms of what keeps me up at night on this: data drift and biases, right? So, imaging protocols, all these things change, the lab switch between different vendors, or a patient has issues with new emerging data points. And health systems serve vastly different populations. So, if our models are trained in one context and deployed in another, then the output can be really inaccurate. So, the idea is to become a collaborative approach where we can use federated learning and patient-centricity so we can be much more efficient in developing those models that account for all the populations, and any retraining that is used based on data can be diverse enough that it represents all of us and we can be treated in a very good, appropriate way. So, if a clinician doesn't understand why a recommendation is made, as you probably know, you probably don't trust it, and we shouldn't expect them to. So, I think this is the next wave of the future. We need to make sure that we account for all those things. Dr. Paul Hanona: Absolutely. And even the part about the clinical trials, I want to dive a little bit more into in a few questions. I just kind of wanted to make a quick comment. Like you said, some of the prevalent things that I see are the ambient scribes. It seems like that's really taken off in the last year, and it seems like it's improving at a pretty dramatic speed as well. I wonder how quickly that'll get adopted by the majority of physicians or practitioners in general throughout the country. And you also mentioned things with AI tools regarding helping regulators move things quicker, even the radiation oncologist, helping them in their workflow with contouring and what else they might have to do. And again, the clinical trials thing will be quite interesting to get into. The first question I had subsequent to that is just more so when you have large datasets. 
And this pertains to two things: the paper that you published recently regarding different ways to use AI in the space of oncology referred to drug development, the way that we look at how we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps that you have to take to design something, to make sure that one chemical will fit into the right chemical or the structure of the molecule, that takes a lot of time to tinker with. What are your thoughts on AI tools to help accelerate drug development? Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail and something that I feel we should dedicate as much time and effort as possible because it relies on multimodality. It cannot be solved by just looking at patient histories. It cannot be solved by just looking at the tissue alone. It's combining all these different datasets and being able to understand the microenvironment, the patient condition and prior treatments, and how dynamic changes that we do through interventions and also exposome – the things that happen outside of the patient's own control – can be leveraged to determine like what's the best next step in terms of drugs. So, the ones that we heard the news the most is, for example, the Nobel Prize-winning [for Chemistry awarded to Demis Hassabis and John Jumper for] AlphaFold, an AI system that predicts protein structures right? So, we solved this very interesting concept of protein folding where, in the past, it would take the history of the known universe, basically – what's called the Levinthal's paradox – to be able to just predict on amino acid structure alone or the sequence alone, the way that three-dimensionally the proteins will fold. So, with that problem being solved and the Nobel Prize being won, the next step is, “Okay, now we know how this protein is there and just by sequence, how can we really understand any new drug that can be used as a candidate and leverage all the data that has been done for many years of testing against a specific protein or a specific gene or knockouts and what not?” So, this is the future of oncology and where we're probably seeing a lot of investments on that. The key challenge here is mostly working on the side of not just looking at pathology, but leveraging this digital pathology with whole slide imaging and identifying the microenvironment of that specific tissue. There's a number of efforts currently being done. One isn't just H&E, like hematoxylin and eosin, slides alone, but with whole imaging, now we can use expression profiles, spatial transcriptomics, and gene whole exome sequencing in the same space and use this transformer technology in a multimodality approach that we know already the slide or the pathology, but can we use that to understand, like, if I knock out this gene, how is the microenvironment going to change to see if an immunotherapy may work better, right? If we can make a microenvironment more reactive towards a cytotoxic T cell profile, for example. So, that is the way where we're really seeing the field moving forward, using multimodality for drug discovery. So, the FDA now seems to be very eager to support those initiatives, so that's of course welcome. 
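To make the multimodal fusion idea above concrete, here is a minimal sketch, assuming PyTorch; the modality names, embedding sizes, masking step, and prediction head are illustrative assumptions, not the actual architecture Dr. Loaiza-Bonilla describes. It projects histology, transcriptomics, and genomics embeddings into a shared space, encodes them with a small transformer, and randomly masks an entire modality during training so the model learns to predict from incomplete inputs.

```python
# Minimal sketch of multimodal fusion with modality masking (illustrative only;
# modality names, sizes, and the prediction target are hypothetical).
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, dims, d_model=256):
        super().__init__()
        # One linear projection per modality into a shared embedding space.
        self.proj = nn.ModuleDict({name: nn.Linear(dim, d_model) for name, dim in dims.items()})
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # e.g., probability of response to a candidate drug

    def forward(self, features, mask_prob=0.3):
        tokens = []
        for name, x in features.items():
            tok = self.proj[name](x)                 # (batch, d_model)
            if self.training and torch.rand(()) < mask_prob:
                tok = torch.zeros_like(tok)          # mask out a whole modality at random
            tokens.append(tok)
        seq = torch.stack(tokens, dim=1)             # (batch, n_modalities, d_model)
        fused = self.encoder(seq).mean(dim=1)        # pool across modalities
        return torch.sigmoid(self.head(fused))

# Toy usage: random tensors stand in for embeddings from pretrained encoders.
dims = {"histology": 512, "transcriptomics": 2000, "genomics": 300}
model = MultimodalFusion(dims)
batch = {name: torch.randn(4, dim) for name, dim in dims.items()}
print(model(batch).shape)  # torch.Size([4, 1])
```

In practice the per-modality embeddings would come from pretrained encoders (for example, a whole-slide-image model), and the prediction target would be whatever endpoint a given study defines; the sketch only shows the fusion-plus-masking pattern.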
And now the key thing is the investment to do this in a meaningful way so we can see those candidates that we're seeing from different companies now being leveraged for rare disease, for things that are going to be almost impossible to collect enough data, and make it efficient by using these algorithms that sometimes, just with multiple masking – basically, what they do is they mask all the features and force the algorithm to find solutions based on the specific inputs or prompts we're doing. So, I'm very excited about that, and I think we're going to be seeing that in the future. Dr. Paul Hanona: So, essentially, in a nutshell, we're saying we have the cancer, which is maybe a dandelion in a field of grass, and we want to see the grass that's surrounding the dandelion, which is the pathology slides. The problem is, to the human eye, it's almost impossible to look at every single piece of grass that's surrounding the dandelion. And so, with tools like AI, we can greatly accelerate our study of the microenvironment or the grass that's surrounding the dandelion and better tailor therapy, come up with therapy. Otherwise, like you said, to truly generate a drug, this would take years and years. We just don't have the throughput to get to answers like that unless we have something like AI to help us. Dr. Arturo Loaiza-Bonilla: Correct. Dr. Paul Hanona: And then, clinical trials. Now, this is an interesting conversation because if you ever look up our national guidelines as oncologists, there's always a mention of, if treatment fails, consider clinical trials. Or in the really aggressive cancers, sometimes you might just start out with clinical trials. You don't even give the standard first-line therapy because of how ineffective it is. There are a few issues with clinical trials that people might not be aware of, but the fact that the majority of patients who should be on clinical trials are never given the chance to be on clinical trials, whether that's because of proximity, right, they might live somewhere that's far from the institution, or for whatever reason, they don't qualify for the clinical trial, they don't meet the strict inclusion criteria. But a reason you mentioned early on is that it's simply impossible for someone to be aware of every single clinical trial that's out there. And then even if you are aware of those clinical trials, to actually find the sites and put in the time could take hours. And so, how is AI going to revolutionize that? Because in my mind, it's not that we're inventing a new tool. Clinical trials have always been available. We just can't access them. So, if we have a tool that helps with access, wouldn't that be huge? Dr. Arturo Loaiza-Bonilla: Correct. And that has been one of my passions. And for those who know me and follow me and we've spoke about it in different settings, that's something that I think we can solve. This other paradox, which is the clinical trial enrollment paradox, right? We have tens of thousands of clinical trials available with millions of patients eager to learn about trials, but we don't enroll enough and many trials close to accrual because of lack of enrollment. It is completely paradoxical and it's because of that misalignment because patients don't know where to go for trials and sites don't know what patients they can help because they haven't reached their doors yet. So, the solution has to be patient-centric, right? We have to put the patient at the center of the equation. 
And that was precisely what we had been discussing during the ASCO meeting. There was an ASCO Education Session where we talked about digital prescreening hubs, where we, in a patient-centric manner, the same way we look for Uber, Instacart, any solution that you may think of that you want something that can be leveraged in real time, we can use these real-world data streams from the patient directly, from hospitals, from pathology labs, from genomics companies, to continuously screen patients who can match to the inclusion/exclusion criteria of unique trials. So, when the patient walks into the clinic, the system already knows if there's a trial and alerts the site proactively. The patient can actually also do decentralization. So, there's a number of decentralized clinical trial solutions that are using what I call the “click and mortar” approach, which is basically the patient is checking digitally and then goes to the site to activate. We can also have the click and mortar in the bidirectional way where the patient is engaged in person and then you give the solution like the ones that are being offered on things that we're doing at Massive Bio and beyond, which is having the patient to access all that information and then they make decisions and enroll when the time is right. As I mentioned earlier, there is this concept drift where clinical trials open and close, the patient line of therapy changes, new approvals come in and out, and sites may not be available at a given time but may be later. So, having that real-time alerts using tools that are able already to extract data from summarization that we already have in different settings and doing this natural language ingestion, we can not only solve this issue with manual chart review, which is extremely cumbersome and takes forever and takes to a lot of one-time assessments with very high screen failures, to a real-time dynamic approach where the patient, as they get closer to that eligibility criteria, they get engaged. And those tools can be built to activate trials, audit trials, and make them better and accessible to patients. And something that we know is, for example, 91%-plus of Americans live close to either a pharmacy or an imaging center. So, imagine that we can potentially activate certain of those trials in those locations. So, there's a number of pharmacies, special pharmacies, Walgreens, and sometimes CVS trying to do some of those efforts. So, I think the sky's the limit in terms of us working together. And we've been talking with corporate groups, they're all interested in those efforts as well, to getting patients digitally enabled and then activate the same way we activate the NCTN network of the corporate groups, that are almost just-in-time. You can activate a trial the patient is eligible for and we get all these breakthroughs from the NIH and NCI, just activate it in my site within a week or so, as long as we have the understanding of the protocol. So, using clinical trial matching in a digitally enabled way and then activate in that same fashion, but not only for NCTN studies, but all the studies that we have available will be the key of the future through those prescreening hubs. 
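As a toy illustration of that prescreening loop, here is a minimal rule-based sketch; all field names, criteria, and trial identifiers are hypothetical, not Massive Bio's schema or any real trial. A patient record that has already been structured by an upstream extraction step is re-checked against each trial's basic inclusion/exclusion rules, so a new lab result or a change in line of therapy can trigger a fresh match before the patient's next visit.

```python
# Toy prescreening matcher: structured patient record vs. trial criteria.
# All field names, thresholds, and trial IDs below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Trial:
    trial_id: str
    cancer_type: str
    required_biomarkers: set = field(default_factory=set)
    excluded_prior_therapies: set = field(default_factory=set)
    max_ecog: int = 2

def eligible(patient: dict, trial: Trial) -> bool:
    """Return True if the patient passes this trial's basic rules."""
    if patient["cancer_type"] != trial.cancer_type:
        return False
    if not trial.required_biomarkers <= set(patient["biomarkers"]):
        return False
    if set(patient["prior_therapies"]) & trial.excluded_prior_therapies:
        return False
    return patient["ecog"] <= trial.max_ecog

def prescreen(patient: dict, trials: list[Trial]) -> list[str]:
    """Re-run whenever labs, therapies, or trial status change."""
    return [t.trial_id for t in trials if eligible(patient, t)]

patient = {
    "cancer_type": "NSCLC",
    "biomarkers": {"KRAS G12C"},
    "prior_therapies": {"platinum doublet"},
    "ecog": 1,
}
trials = [
    Trial("NCT-HYPOTHETICAL-1", "NSCLC", {"KRAS G12C"}, {"KRAS G12C inhibitor"}, max_ecog=1),
    Trial("NCT-HYPOTHETICAL-2", "CRC", {"MSI-H"}),
]
print(prescreen(patient, trials))  # ['NCT-HYPOTHETICAL-1']
```

A production system would layer on the pieces discussed above — real-time data feeds, LLM-based extraction from notes, site and logistics checks — but the core matching step reduces to repeatedly evaluating structured criteria like these as the patient's record evolves.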
So, I think now we're at this very important time where collaboration is the important part and having this silo-breaking approach with interoperability where we can leverage data from any data source and from any electronic medical records and whatnot is going to be essential for us to move forward because now we have the tools to do so with our phones, with our interests, and with the multiple clinical trials that are coming into the pipelines. Dr. Paul Hanona: I just want to point out that the way you described the process involves several variables that practitioners often don't think about. We don't realize the 15 steps that are happening in the background. But just as a clarifier, how much time is it taking now to get one patient enrolled on a clinical trial? Is it on the order of maybe 5 to 10 hours for one patient by the time the manual chart review happens, by the time the matching happens, the calls go out, the sign-up, all this? And how much time do you think a tool that could match those trials quicker and get you enrolled quicker could save? Would it be maybe an hour instead of 15 hours? What's your thought process on that? Dr. Arturo Loaiza-Bonilla: Yeah, exactly. So one is the matching, the other one is the enrollment, which, as you mentioned, is very important. So, it can take, from, as you said, probably between 4 days to sometimes 30 days. Sometimes that's how long it takes for all the things to be parsed out in terms of logistics and things that could be done now agentically. So, we can use agents to solve those different steps that may take multiple individuals. We can just do it as a supply chain approach where all those different steps can be done by a single agent in a simultaneous fashion and then we can get things much faster. With an AI-based solution using these frontier models and multi-agentic AI – and we presented some of this data in ASCO as well – you can do 5,000 patients in an hour, right? So, just enrolling is going to be between an hour and maximum enrollment, it could be 7 days for those 5,000 patients if it was done at scale in a multi-level approach where we have all the trials available. Dr. Paul Hanona: No, definitely a very exciting aspect of our future as oncologists. It's one thing to have really neat, novel mechanisms of treatment, but what good is it if we can't actually get it to people who need it? I'm very much looking for the future of that. One of the last questions I want to ask you is another prevalent way that people use AI is just simply looking up questions, right? So, traditionally, the workflow for oncologists is maybe going on national guidelines and looking up the stage of the cancer and seeing what treatments are available and then referencing the papers and looking at who was included, who wasn't included, the side effects to be aware of, and sort of coming up with a decision as to how to treat a cancer patient. But now, just in the last few years, we've had several tools become available that make getting questions easier, make getting answers easier, whether that's something like OpenAI's tools or Perplexity or Doximity or OpenEvidence or even ASCO has a Guidelines Assistant as well that is drawing from their own guidelines as to how to treat different cancers. Do you see these replacing traditional sources? Do you see them saving us a lot more time so that we can be more productive in clinic? What do you think is the role that they're going to play with patient care? Dr. 
Arturo Loaiza-Bonilla: Such a relevant question, particularly at this time, because these AI-enabled query tools, they're coming left and right and becoming increasingly common in our daily workflows and things that we're doing. So, traditionally, when we go and we look for national guidelines, we try to understand the context ourselves and then we make treatment decisions accordingly. But that is a lot of a process that now AI is helping us to solve. So, at face value, it seems like an efficiency win, but in many cases, I personally evaluate platforms as the chief of hem/onc at St. Luke's and also having led the digital engagement things through Massive Bio and trying to put things together, I can tell you this: not all tools are created equal. In cancer care, each data point can mean the difference between cure and progression, so we cannot really take a lot of shortcuts in this case or have unverified output. So, the tools are helpful, but it has to be grounded in truth, in trusted data sources, and they need to be continuously updated with, like, ASCO and NCCN and others. So, the reason why the ASCO Guidelines Assistant, for instance, works is because it builds on all these recommendations, is assessed by end users like ourselves. So, that kind of verification is critical, right? We're entering a phase where even the source material may be AI-generated. So, the role of human expert validation is really actually more important, not less important. You know, generalist LLMs, even when fine-tuned, they may not be enough. You can pull a few API calls from PubMed, etc., but what we need now is specialized, context-aware, agentic tools that can interpret multimodal and real-time clinical inputs. So, something that we are continuing to check on and very relevant to have entities and bodies like ASCO looking into this so they can help us to be really efficient and really help our patients. Dr. Paul Hanona: Dr. Bonilla, what do you want to leave the listener with in terms of the future direction of AI, things that we should be cautious about, and things that we should be optimistic about? Dr. Arturo Loaiza-Bonilla: Looking 5 years ahead, I think there's enormous promise. As you know, I'm an AI enthusiast, but always, there's a few priorities that I think – 3 of them, I think – we need to tackle head-on. First is algorithmic equity. So, most AI tools today are trained on data from academic medical centers but not necessarily from community practices or underrepresented populations, particularly when you're looking at radiology, pathology, and what not. So, those blind spots, they need to be filled, and we can eliminate a lot of disparities in cancer care. So, those frameworks to incentivize while keeping the data sharing using federated models and things that we can optimize is key. The second one is the governance on the lifecycle. So, you know, AI is not really static. So, unlike a drug that is approved and it just, you know, works always, AI changes. So, we need to make sure that we have tools that are able to retrain and recall when things degrade or models drift. So, we need to use up-to-date AI for clinical practice, so we are going to be in constant revalidation and make it really easy to do. And lastly, the human-AI interface. You know, clinicians don't need more noise or we don't need more black boxes. We need decision support that is clear, that we can interpret, and that is actionable. “Why are you using this? Why did we choose this drug? Why this dose? 
Why now?” So, all these things are going to help us and that allows us to trace evidence with a single click. So, I always call it back to the Moravec's paradox where we say, you know, evolution gave us so much energy to discern in the sensory-neural and dexterity. That's what we're going to be taking care of patients. We can use AI to really be a force to help us to be better clinicians and not to really replace us. So, if we get this right and we decide for transparency with trust, inclusion, etc., it will never replace any of our work, which is so important, as much as we want, we can actually take care of patients and be personalized, timely, and equitable. So, all those things are what get me excited every single day about these conversations on AI. Dr. Paul Hanona: All great thoughts, Dr. Bonilla. I'm very excited to see how this field evolves. I'm excited to see how oncologists really come to this field. I think with technology, there's always a bit of a lag in adopting it, but I think if we jump on board and grow with it, we can do amazing things for the field of oncology in general. Thank you for the advancements that you've made in your own career in the field of AI and oncology and just ultimately with the hopeful outcomes of improving patient care, especially cancer patients. Dr. Arturo Loaiza-Bonilla: Thank you so much, Dr. Hanona. Dr. Paul Hanona: Thanks to our listeners for your time today. If you value the insights that you hear on ASCO Daily News Podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts. Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement. More on today's speakers: Dr. Arturo Loaiza-Bonilla @DrBonillaOnc Dr. Paul Hanona @DoctorDiscover on YouTube Follow ASCO on social media: @ASCO on Twitter ASCO on Facebook ASCO on LinkedIn ASCO on BlueSky Disclosures: Paul Hanona: No relationships to disclose. Dr. Arturo-Loaiza-Bonilla: Leadership: Massive Bio Stock & Other Ownership Interests: Massive Bio Consulting or Advisory Role: Massive Bio, Bayer, PSI, BrightInsight, CardinalHealth, Pfizer, AstraZeneca, Medscape Speakers' Bureau: Guardant Health, Ipsen, AstraZeneca/Daiichi Sankyo, Natera
“I really love the notion of contributing something to physics.” — Chemistry laureate John Jumper has always been passionate about science and understanding the world. With the AI tool AlphaFold, he and his co-laureate Demis Hassabis have provided a possibility to predict protein structures. In this podcast conversation, Jumper speaks about the excitement of seeing how AI can help us more in the future. Jumper also shares his scientific journey and how he ended up working with AlphaFold. He describes a special memory from the 2018 CASP conference where AlphaFold was presented for the first time. Another life-changing moment was the announcement of the Nobel Prize in Chemistry in October 2024 – Jumper tells us how his life has changed since then. Through their lives and work, failures and successes – get to know the individuals who have been awarded the Nobel Prize on the Nobel Prize Conversations podcast. Find it on Acast, or wherever you listen to pods. https://linktr.ee/NobelPrizeConversations © Nobel Prize Outreach. Hosted on Acast. See acast.com/privacy for more information.
Interview with Stephen Witt Altman's Gentle Singularity Sutskever video: start at 5:50-6:40 Paris on Apple Glass OpenAI slams court order to save all ChatGPT logs, including deleted chats Disney and Universal Sue A.I. Firm for Copyright Infringement Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity Futurism on the paper Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss YouTube Loosens Rules Guiding the Moderation of Videos Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence' Meta and Yandex are de-anonymizing Android users' web browsing identifiers Amazon 'testing humanoid robots to deliver packages' Google battling 'fox infestation' on roof of £1bn London office 23andMe's Former CEO Pushes Purchase Price Nearly $50 Million Higher Code to control vocal production with hands Warner Bros. Discovery to split into two public companies by next year Social media creators to overtake traditional media in ad revenue this year Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Stephen Witt Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: agntcy.org smarty.com/twit monarchmoney.com with code TWIT spaceship.com/twit
Thu, 29 May 2025 16:00:00 GMT http://relay.fm/material/518 Andy Ihnatko and Florence Ion. Let's talk about what developers were promised at last week's Google I/O. Plus, what are Sam and Jony cooking up? This episode of Material is sponsored by: Vitally: A new era for customer success productivity. Get a free pair of AirPods Pro when you book a qualified meeting. Yawn Email: Tame your inbox with intelligent daily summaries. Start your 14-day free trial today. Links and Show Notes: Google I/O 2025 developer keynote Sam and Jony introduce io Google DeepMind's Demis Hassabis on AGI, Innovation and More
This week, we take a field trip to Google and report back about everything the company announced at its biggest show of the year, Google I/O. Then, we sit down with Google DeepMind's chief executive and co-founder, Demis Hassabis, to discuss what his A.I. lab is building, the future of education, and what life could look like in 2030. Guest: Demis Hassabis, co-founder and chief executive of Google DeepMind. Additional Reading: At Google I/O, everything is changing and normal and scary and chill; Google Unveils A.I. Chatbot, Signaling a New Era for Search; Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Demis Hassabis is the CEO of Google DeepMind. Sergey Brin is the co-founder of Google. The two leading tech executives join Alex Kantrowitz for a live interview at Google's IO developer conference to discuss the frontiers of AI research. Tune in to hear their perspective on whether scaling is tapped out, how reasoning techniques have performed, what AGI actually means, the potential for an intelligence explosion, and much more. Tune in for a deep look into AI's cutting edge featuring two executives building it. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
OpenAI just pitched “OpenAI for Countries,” offering democracies a turnkey AI infrastructure while some of the world's richest quietly stockpile bunkers and provisions. We'll dig into billionaire Paul Tudor Jones's revelations about AI as an imminent security threat, and why top insiders are buying land and livestock to ride out the next catastrophe. Plus, a wild theory that Gavin has hatched regarding OpenAI's non-profit designation. Then, we break down the updated Google Gemini Pro 2.5's leap forward in coding… just 15 minutes to a working game prototype…and how this could put game creation in every kid's hands. Plus, Suno's 4.5 music model that finally brings human‑quality vocals, and robots gone wild in Chinese warehouses. AND OpenAI drops 3 billion on Windsurf, HeyGen's avatar model achieving flawless lip sync from any angle, the rise of blazing‑fast open source video engines, UCSD's whole‑body ambulatory robots shaking like nervous toddlers, and even Game of Thrones Muppet mashups with bizarre glitch art. STOCK YOUR PROVISIONS. THE ROBOT CLEANUP CREWS ARE NEXT. #ai #ainews #openai Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Does AI Pose an “Imminent Threat”? Paul Tudor Jones ‘Heard' About It Conference https://x.com/AndrewCurran_/status/1919759495129137572 Terrifying Robot Goes Crazy https://www.reddit.com/r/oddlyterrifying/comments/1kcbkfe/robot_on_hook_went_berserk_all_of_a_sudden/ Cleaner Robots To Pick Up After The Apocalypse https://x.com/kimmonismus/status/1919510163112779777 https://x.com/loki_robotics/status/1919325768984715652 OpenAI For Countries https://openai.com/global-affairs/openai-for-countries/ OpenAI Goes Non-Profit For Real This Time https://openai.com/index/evolving-our-structure/ New Google Gemini 2.5 Pro Model https://blog.google/products/gemini/gemini-2-5-pro-updates/ Demis Hassabis on the coding upgrade (good video of drawing an app) https://x.com/demishassabis/status/1919779362980692364 New Minecraft Bench looks good https://x.com/adonis_singh/status/1919864163137957915 Gavin's Bear Jumping Game (in Gemini Window) https://gemini.google.com/app/d0b6762f2786d8d2 OpenAI Buys Windsurf https://www.reuters.com/business/openai-agrees-buy-windsurf-about-3-billion-bloomberg-news-reports-2025-05-06/ Suno v4.5 https://x.com/SunoMusic/status/1917979468699931113 HeyGen Avatar v4 https://x.com/joshua_xu_/status/1919844622135627858 Voice Mirroring https://x.com/EHuanglu/status/1919696421625987220 New OpenSource Video Model From LTX https://x.com/LTXStudio/status/1919751150888239374 Using Runway References with 3D Models https://x.com/runwayml/status/1919376580922552753 Amo Introduces Whole Body Movements To Robotics (and looks a bit shaky rn) https://x.com/TheHumanoidHub/status/1919833230368235967 https://x.com/xuxin_cheng/status/1919722367817023779 Realistic Street Fighter Continue Screens https://x.com/StutteringCraig/status/1918372417615085804 Wandering Worlds - Runway Gen48 Finalist https://runwayml.com/gen48?film=wandering-woods Centaur Skipping Rope https://x.com/CaptainHaHaa/status/1919377295137005586 The Met Gala for Aliens https://x.com/AIForHumansShow/status/1919566617031393608 The Met Gala for Nathan Fielder & Sully https://x.com/AIForHumansShow/status/1919600216870637996 
Loosening of Sora Rules https://x.com/AIForHumansShow/status/1919956025244860864
Google says we're not ready for AGI and honestly, they might be right. DeepMind's Demis Hassabis warns we could be just five years away from artificial general intelligence, and society isn't prepared. Um, yikes? VISIT OUR SPONSOR https://molku.ai/ In this episode, we break down Google's new “Era of Experience” paper and what it means for how AIs will learn from the real world. We talk agentic systems, long-term memory, and why this shift might be the key to creating truly intelligent machines. Plus, a real AI vending machine running on Claude, a half-marathon of robots in Beijing, and Cluely, the tool that lets you lie better with AI. We also cover new AI video tools from Minimax and Character.AI, Runway's 48-hour film contest, and Dia, the open-source voice model that can scream and cough better than most humans. Plus: AI Logan Paul, AI marketing scams, and one very cursed Shrek feet idea. AGI IS ALMOST HERE BUT THE ROBOTS, THEY STILL RUN. #ai #ainews #agi Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Demis Hassabis on 60 Minutes https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/ We're Not Ready For AGI From Time Interview with Hassabis https://x.com/vitrupo/status/1915006240134234608 Google DeepMind's “Era of Experience” Paper https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf ChatGPT Explainer of Era of Experience https://chatgpt.com/share/680918d5-cde4-8003-8cf4-fb1740a56222 Podcast with David Silver, VP Reinforcement Learning Google DeepMind https://x.com/GoogleDeepMind/status/1910363683215008227 Intuicell Robot Learning on its own https://youtu.be/CBqBTEYSEmA?si=U51P_R49Mv6cp6Zv Agentic AI “Moore's Law” Chart https://theaidigest.org/time-horizons AI Movies Can Win Oscars https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html?unlocked_article_code=1.B08.E7es.8Qnj7MeFBLwQ&smid=url-share Runway CEO on Oscars + AI https://x.com/c_valenzuelab/status/1914694666642956345 Gen48 Film Contest This Weekend - Friday 12p EST deadline https://x.com/runwayml/status/1915028383336931346 Descript AI Editor https://x.com/andrewmason/status/1914705701357937140 Character AI's New Lipsync / Video Tool https://x.com/character_ai/status/1914728332916384062 Hailuo Character Reference Tool https://x.com/Hailuo_AI/status/1914845649704772043 Dia Open Source Voice Model https://x.com/_doyeob_/status/1914464970764628033 Dia on Hugging Face https://huggingface.co/nari-labs/Dia-1.6B Cluely: New Start-up From Student Who Was Caught Cheating on Tech Interviews https://x.com/im_roy_lee/status/1914061483149001132 AI Agent Writes Reddit Comments Looking To “Convert” https://x.com/SavannahFeder/status/1914704498485842297 Deepfake Logan Paul AI Ad https://x.com/apollonator3000/status/1914658502519202259 The Humanoid Half-Marathon https://apnews.com/article/china-robot-half-marathon-153c6823bd628625106ed26267874d21 Video From Reddit of Robot Marathon https://www.reddit.com/r/singularity/comments/1k2mzyu/the_humanoid_robot_halfmarathon_in_beijing_today/ Vending Bench (AI Agents Run Vending Machines) https://andonlabs.com/evals/vending-bench Turning Kids Drawings Into AI Video 
https://x.com/venturetwins/status/1914382708152910263 Geriatric Meltdown https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Bird flu, which has long been an emerging threat, took a significant turn in 2024 with the discovery that the virus had jumped from a wild bird to a cow. In just over a year, the pathogen has spread through dairy herds and poultry flocks across the United States. It has also infected people, resulting in 70 confirmed cases, including one fatality. Correspondent Bill Whitaker spoke with veterinarians and virologists who warn that, if unchecked, this outbreak could lead to a new pandemic. They also raise concerns about the Biden administration's slow response in 2024 and now the Trump administration's decision to lay off over 100 key scientists. Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. One of the most awe-inspiring and mysterious migrations in the natural world is currently taking place, stretching from Mexico to the United States and Canada. This incredible spectacle involves millions of monarch butterflies embarking on a monumental aerial journey. Correspondent Anderson Cooper reports from the mountains of Mexico, where the monarchs spent the winter months sheltering in trees before emerging from their slumber to take flight. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
How can AI help us understand and master deeply complex systems—from the game Go, which has on the order of 10^170 possible positions, to proteins, which, on average, can fold in roughly 10^300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher, co-founder, and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion in Go, and later created AlphaFold, which solved the 50-year-old protein folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/ Listen to more from Possible here. Learn more about your ad choices. Visit podcastchoices.com/adchoices
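Neither figure is derived in the episode description, but both orders of magnitude can be reproduced with a back-of-envelope count. The short Python sketch below is purely illustrative and not from the episode: it assumes 3 possible states per Go intersection as an upper bound, and, for a Levinthal-style protein estimate, an "average" protein of roughly 300 residues with about 10 plausible conformations per residue.

```python
# Back-of-envelope orders of magnitude for the numbers quoted above.
# Illustrative only; the assumed parameters are not from the episode.

def order_of_magnitude(n: int) -> int:
    """Exact floor(log10(n)) for a positive integer, via its digit count."""
    return len(str(n)) - 1

# Go: each of the 19 x 19 = 361 intersections is empty, black, or white.
# This over-counts (not every configuration is legal), so it is an upper
# bound; the count of strictly legal positions is on the order of 10^170.
go_upper_bound = 3 ** (19 * 19)
print(f"Go configurations (upper bound): ~10^{order_of_magnitude(go_upper_bound)}")  # ~10^172

# Protein folding, Levinthal-style: ~300 residues with ~10 plausible
# conformations per residue gives an astronomically large search space.
residues, conformations_per_residue = 300, 10
protein_space = conformations_per_residue ** residues
print(f"Protein conformations (rough): ~10^{order_of_magnitude(protein_space)}")  # ~10^300
```

The point of both numbers is the same: brute-force search is hopeless at these scales, which is why learned systems like AlphaGo and AlphaFold, the subjects of this conversation, matter.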
For years, artificial intelligence companies have heralded the coming of artificial general intelligence, or AGI. OpenAI, which makes the chatbot ChatGPT, has said that their founding goal was to build AGI that “benefits all of humanity” and “gives everyone incredible new capabilities.” Google DeepMind cofounder Dr. Demis Hassabis has described AGI as a system that “should be able to do pretty much any cognitive task that humans can do.” Last year, OpenAI CEO Sam Altman said AGI will arrive sooner than expected, but that it would matter much less than people think. And earlier this week, Altman said in a blog post that the company knows how to build AGI as we've “traditionally understood it.” But what is artificial general intelligence supposed to be, anyway? Ira Flatow is joined by Dr. Melanie Mitchell, a professor at the Santa Fe Institute who studies cognition in artificial intelligence and machine systems. They talk about the history of AGI, how biologists study animal intelligence, and what could come next in the field. Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.