Want to Master AI Agents in 2025? Get the guide: https://clickhubspot.com/etv
Episode 73: What's really holding back the future of AI—and are we truly prepared for what comes next? Matt Wolfe (https://x.com/mreflow) is joined by Mustafa Suleyman (https://x.com/mustafasuleyman), legendary AI innovator, co-founder of DeepMind, former founder of Inflection AI, and now CEO of Microsoft AI, where he's leading the massive Copilot transformation. This episode unpacks the myths around AI's “training wall,” whether hallucinations are actually a feature instead of a bug, the dawn of the agentic era—where AIs don't just chat, but plan and act for you—and the shifting landscape for software builders as anyone can ship products in minutes. Mustafa also shares firsthand stories and practical advice for leveraging today's AI—from offloading tasks to agents THIS WEEK, to why moats aren't about headcount or credentials in the new era. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
(00:00) AI Insights with Mustafa Suleyman
(03:31) Adapting AI Amid Data Challenges
(07:31) Technology's Misleading Terminology
(12:16) Tool Use Defines Human Progress
(15:49) Revolutionizing Code with AI Tools
(16:31) Competitive Innovation Boom Ahead
—
Mentions:
Mustafa Suleyman: https://mustafa-suleyman.ai/
Microsoft AI: https://www.microsoft.com/en-us/ai
DeepMind: https://deepmind.google/
Inflection: https://inflection.ai/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow
—
Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Humayun Sheikh, Founder and CEO of Fetch.ai. He is an entrepreneur, investor, and tech visionary who is passionate about technologies such as AI, machine learning, autonomous agents, and blockchains. In the past, he was a founding investor in DeepMind, where he supported commercialisation of early-stage AI & deep neural network technology. Currently, he is leading Fetch.ai as CEO and co-founder, a start-up building the autonomy of the future. He is an expert on the topics of artificial intelligence, machine learning, autonomous agents, as well as the intersection of blockchain and commodities. In this conversation, we discuss:
- AI outlook for the next couple of years
- The acceleration of AI
- AI will unlock two main things: quantum compute and biotech
- AI agents in crypto
- Providing everyone an agentic system out of the box
- Why is $FET undervalued?
- The $FET crypto treasury news
- Decentralized AI agents
- AI & jobs
Fetch.ai Website: Fetch.ai
X: @Fetch_ai
Discord: discord.gg/fetchai
Humayun Sheikh
X: @HMsheikh4
LinkedIn: Humayun Sheikh
---
This episode is brought to you by EMCD. EMCD is a trailblazer in the Web3 fintech space, committed to redefining finance with a human-centered approach. For seven years, EMCD has been building tools that empower a diverse community of miners, traders, investors, digital nomads, and entrepreneurs. What started as a determined startup mining pool has grown into a global force, once ranking among the top 10 Bitcoin mining pools worldwide. Today, EMCD's mission is broader and bolder: creating innovative Web3 financial solutions that make wealth-building accessible to everyone, no matter where they are. Their platform enables users to grow assets without the stress of chasing volatile market trends or timing every dip and spike.
Dive into their vision and explore their cutting-edge tools at emcd.io.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Mustafa Suleyman, CEO of AI at Microsoft and co-founder of DeepMind, has published a provocative essay warning about the dangers of “seemingly conscious AI.” On today's Big Think edition of The AI Daily Brief, we explore his argument that as AI systems develop memory, personality, and the illusion of subjective experience, people may begin treating them as conscious beings—with profound consequences for society, law, and human identity. We dig into Suleyman's case for why this illusion matters more than the question of whether AI is actually conscious, the risks of model welfare debates, and why industry norms may need to change now.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com – Go to https://blitzy.com/ to build enterprise software in days, not months
Vanta – Simplify compliance – https://vanta.com/nlw
Plumb – The automation platform for AI experts and consultants – https://useplumb.com/
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? nlw@breakdown.network
A veteran Pulitzer Prize-winning journalist shadows the top thinkers in the field of Artificial Intelligence, introducing the breakthroughs and developments that will change the way we live and work. Artificial Intelligence has been “just around the corner” for decades, continually disappointing those who long believed in its potential. But now, with the emergence and growing use of ChatGPT, Gemini, and a rapidly multiplying number of other AI tools, many are wondering: Has AI's moment finally arrived? In AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence (Harper Collins, 2025), Pulitzer Prize-winning journalist Gary Rivlin brings us deep into the world of AI development in Silicon Valley. Over the course of more than a year, Rivlin closely follows founders and venture capitalists trying to capitalize on this AI moment. That includes LinkedIn founder Reid Hoffman, the legendary investor whom the Wall Street Journal once called, “the most connected person in Silicon Valley.” Through Hoffman, Rivlin is granted access to a number of companies on the cutting-edge of AI research, such as Inflection AI, the company Hoffman cofounded in 2022, and OpenAI, the San Francisco-based startup that sparked it all with its release at the end of that year of ChatGPT. In addition to Hoffman, Rivlin introduces us to other AI experts, including OpenAI cofounder Sam Altman and Mustafa Suleyman, the co-founder of DeepMind, an early AI startup that Google bought for $650 million in 2014. Rivlin also brings readers inside Microsoft, Meta, Google and other tech giants scrambling to keep pace. On this vast frontier, no one knows which of these companies will hit it big–or which will flame out spectacularly. In this riveting narrative marbled with familiar names such as Musk, Zuckerberg, and Gates, Rivlin chronicles breakthroughs as they happen, giving us a deep understanding of what's around the corner in AI development. 
An adventure story full of drama and unforgettable personalities, AI Valley promises to be the definitive story for anyone seeking to understand the latest phase of world-changing discoveries and the minds behind them. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
This week's episode of The Refresh dives into Walmart's evolving partnership with The Trade Desk, signaling potential changes in retail media alliances. We explore Google's use of large language models to combat ad fraud, achieving significant reductions in invalid traffic. Finally, we break down Variety's latest upfronts report, showing a continued decline in primetime TV ad commitments and notable growth in streaming investment. This week we cover: Walmart and The Trade Desk's relationship is moving from exclusive to open, raising questions about Walmart's retail data strategy and potential in-house platform development. The Trade Desk faces growing competition from vertically integrated giants like Amazon, Google, and Meta, which benefit from owned inventory and rich first-party data. Google's traffic quality team, in collaboration with Google Research and DeepMind, deployed large language models to detect and reduce mobile invalid traffic by 40%. Variety reports primetime TV ad commitments fell for the third consecutive year, with broadcast down 2.5% and cable down 4.3%. Streaming ad commitments surged nearly 18% year over year, driven by advanced targeting, programmatic buying opportunities, and high-value live sports content moving to digital platforms. Learn more about your ad choices. Visit megaphone.fm/adchoices
What happens when a world-class mathematician meets '80s college radio, Bill Gates' top-10 favorite books, and a host with an algebra redemption arc? A surprisingly funny, fast-moving conversation. Dr. Jordan Ellenberg—John D. MacArthur Professor of Mathematics at UW–Madison and author of How Not to Be Wrong—swaps stories about The Housemartins, consulting on NUMB3RS (yes, one of his lines aired), and competing at the International Mathematical Olympiad. There's a lot of laughter—and a fresh way to see math as culture, craft, and curiosity.
But we also get practical about math education. We discuss the love/hate split students have for math and what it implies for curriculum design; a century of “new” methods (and if anything is truly new); how movie tropes (Good Will Hunting, etc.) shape student identity in math; soccer drills vs. scrimmage as a frame for algebra practice and “honest” applications; grades as feedback vs. record; AI shifting what counts as computation vs. math; why benchmarks miss the point and the risk of lowering writing standards with LLMs; and a preview of Jordan's pro-uncertainty thesis.
Listen to Learn:
- A better answer to “Why am I learning this?” using a soccer analogy
- The two big off-ramps of math for students, and tactics that keep more students on board
- How to replace the “born genius” myth with a mindset that helps any student do math
- When a grade is a record vs. a motivator, and a simple replacement policy that turns a rough start into effort and growth
- What AI will and won't change in math class, and why “does it help create new math?” matters more than benchmark scores
3 Big Takeaways from this Episode:
1. Math mastery comes from practice plus meaning, not a “born genius.” Jordan puts it plainly: “genius is a thing that happens, not a kind of person,” and he uses the soccer drills vs. scrimmage analogy to pair targeted practice with real tasks, with algebraic manipulation as a core high school skill. He urges teachers to “throw a lot of spaghetti at the wall” so different explanations land for different students, because real innovation is iterative and cooperative.
2. Students fall off at fractions and Algebra I. How do we pull them back? Jordan names those two moments as the big off-ramps and points to multiple representations, honest applications, and frequent low-stakes practice to keep kids in. Matt's own algebra story shows how a replacement policy turned failure into effort and persistence, reframing grades as motivation rather than just record-keeping.
3. AI will shift our capabilities and limits in math, but math is still a human task. Calculators and Wolfram already do student-level work, and Jordan argues benchmarks like DeepMind vs. the International Mathematical Olympiad matter less than whether tools help create new mathematics. He also warns against letting LLMs lower writing standards and says the real test is whether these systems add substantive math, not just win contests.
Resources in this Episode:
Visit Jordan Ellenberg's website! jordanellenberg.com
Read How Not to Be Wrong: The Power of Mathematical Thinking
We want to hear from you! Send us a text.
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn
OpenAI released GPT-5, and it's... polarizing? Google dropped something kinda outta this world. And Anthropic picked a bad week to drop a new model. This week was one of the busiest in AI of the year. If you missed anything, this is your one-stop shop to get caught up. On Mondays, Everyday AI brings you the AI News That Matters. No fluff. No B.S. Just the meaningful AI news that impacts us all.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
- OpenAI Releases GPT-5—Smarter, Faster Model
- GPT-5 Integration in Microsoft Copilot, Azure
- Apple Intelligence Announces GPT-5 Integration
- GPT-5 Multimodal Input and Output Features
- GPT-5 Rollout Issues and Model Router Bugs
- Anthropic Launches Claude Opus 4.1 Update
- Google Genie 3 World Model Demonstration
- OpenAI Debuts GPT OSS Open Source Model
- Google Gemini Guided Learning Launches
- Eleven Labs Releases AI Music Generator
- Meta Forms TBD Lab for Llama Models
- ChatGPT Plus Plan Rate Limit Controversy
- User Backlash Over Removal of Old Models
- Competition Among AI Model Providers Escalates
Timestamps:
00:00 GPT-5's Global Impact Unveiled
03:22 "GPT-5: Stellar Yet Polarizing Release"
06:23 "OpenAI's Impactful GPT-5 Update"
11:51 "GPT-5 Integration Expands Microsoft Reach"
13:19 Microsoft Integrates GPT-5 in AI Tools
17:15 "GPT-5 Surpasses, OpenAI's Model Looms"
23:18 "Guided Learning with Google Gemini"
25:26 "AI Integration Critique in Education"
30:40 AI Industry Disruption by GPT OSS
34:49 AI Advances: Genie 3 Unveiled
37:54 AI Video in World Simulators
42:23 ChatGPT Plus Users Gain Higher Limits
46:36 Altman on Unhealthy AI Dependencies
49:41 Tech Updates: New Releases and Controversies
51:24 Tech Giants Launch Major AI Models
Keywords: GPT-5, OpenAI, AI news, large language model, ChatGPT, Microsoft Copilot, Apple Intelligence, iOS 26, multimodal model, model router, reasoning models, AI hallucinations, factual accuracy, AI safety, customization, API pricing, Anthropic, Claude Opus 4.1, agentic tasks, software engineering, coding assistant, Google Genie 3, world model, DeepMind, persistent environments, embodied AI, physical mechanics, AI video generation, Sora, AI benchmarking, LM Arena, Google Gemini 2.5 Pro, Guided Learning, LearnLM, Gemini Experiences, active learning AI, AI in education, AI partnerships, Apple integration, real-time r
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
Today's weekend edition focused on the Instagram Map feature - what it is, the backlash it's received and why. Lia Haberman of the In Case You Missed It newsletter stops by to explain what all the fuss is about and why it matters. Also Adam Mosseri busts an Instagram myth, and Ashley Coffey and I dive into AI news around Perplexity, Cloudflare, and Google's new AI model.
Links:
Lia Haberman and ICYMI
ICYMI: Repost, Maps... Which Instagram updates do you really need?! (Substack)
Lia's Threads Post about Instagram Maps (Threads)
Instagram: Myth-Busting about who you interact with (Instagram)
AI News:
Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives (Cloudflare)
Some people are defending Perplexity after Cloudflare ‘named and shamed' it (TechCrunch)
Google's DeepMind thinks its new Genie 3 world model presents a stepping stone toward AGI (TechCrunch)
AI News You Should Know About - Entire Episode: (Audio) (Video)
Sign Up for The Weekly Email Roundup: Newsletter
Leave a Review: Apple Podcasts
Follow Me on Instagram: @danielhillmedia
Logan Kilpatrick shares how DeepMind's organizational changes helped their resurgence in AI, what needs to happen to reach 100M developers, and why the next six months are more exciting than ever.
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.
Links:
Google DeepMind
Logan Kilpatrick
Logan Kilpatrick podcast
NotebookLM
Gemini CLI
Veo
Plus: Microsoft is raiding Google's DeepMind for talent to bolster its AI ambitions. And, United Airlines resumes flights after a tech issue causes widespread delays. Azhar Sukri hosts. Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
João Gabriel, a Rio de Janeiro native, grew up with the family computer, so it wasn't hard to choose Computer Engineering when it came time to pick a degree. When the opportunity arose to do part of his studies in France, he tried, then tried again, and eventually succeeded. There, after a pandemic, a merger between universities, and some bureaucratic juggling, he ended up staying. One day, after helping a friend with a work problem, he received a job offer that, despite not working out (again, due to bureaucracy), eventually led him to DeepMind in London.
In this episode, João shares what it has been like to follow (and contribute to) the entire generative AI revolution from inside Google, along with the day-to-day of living in the country with the most foreigners in the world.
Fabrício Carraro, your polyglot traveler
João Oliveira, AI Research Engineer at Google DeepMind in London, England
Links:
IA Sob Controle with João
Scikit Learn
LeetCode
The AlphaGo documentary
The original GPT study from 2017
Project Lookout
Discover Alura's School of Artificial Intelligence, dive deep into the universe of AI applied to different fields, and master the main tools shaping the present.
TechGuide.sh, a map of the main technologies the market demands for different careers, with our suggestions and opinions.
#7DaysOfCode: Put your programming knowledge into practice with free daily challenges. Visit https://7daysofcode.io/
Listeners of the Dev Sem Fronteiras podcast get 10% off all Alura Língua plans. Just go to https://www.aluralingua.com.br/promocao/devsemfronteiras/ and start learning English and Spanish today!
Production and content:
Alura Língua online language courses – https://www.aluralingua.com.br/
Alura online technology courses – https://www.alura.com.br/
Editing and sound: Rede Gigahertz de Podcasts
On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news. Google security engineering VP Heather Adkins drops by to talk about their AI bug hunter, and Risky Business producer Amberleigh Jack makes her main show debut. This episode explores the rise of AI-powered bug hunting:
- Google's Project Zero and DeepMind team up to find and report 20 bugs to open source projects
- The XBOW AI bug hunting platform sees success on HackerOne
- Is an AI James Kettle on the horizon?
There's also plenty of regular cybersecurity news to discuss:
- On-prem SharePoint's codebase is maintained out of China… awkward!
- China frets about the US backdooring its NVIDIA chips, how you like ‘dem apples, China?
- SonicWall advises customers to turn off their VPNs
- Hardware controlling Dell laptop fingerprint and card readers has nasty driver bugs
- Russia uses its ISPs to in-the-middle embassy computers and backdoor ‘em
- The Russian government pushes VK's Max messenger for everything
This week's show is sponsored by device management platform Devicie. Head of Solutions Sean Ollerton talks through the impending Windows 10 apocalypse, as Microsoft ends mainstream support. He says Windows 11 isn't as scary as people make out, but if the update isn't on your radar now, time is running out. This episode is also available on YouTube.
Show notes:
Google says its AI-based bug hunter found 20 security vulnerabilities | TechCrunch
Is XBOW's success the beginning of the end of human-led bug hunting? Not yet. | CyberScoop
James Kettle on X: "There I am being careful to balance hyping my talk without going too far and then this gets published
On this week's AI Inside with Jason Howell and Jeff Jarvis, DeepMind shows off its Genie 3 simulation world model, Perplexity is under fire for controversial web crawling tactics, ElevenLabs unveils a commercial-ready AI music generator, and Illinois becomes the first state to ban AI-powered therapists. Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
0:00:00 - Podcast begins
0:01:04 - Jason's three hour conversation with ChatGPT
0:12:24 - DeepMind reveals Genie 3 “world model” that creates real-time interactive simulations
0:22:41 - Open models by OpenAI
0:24:55 - LeCun and Ng on China and open-source momentum
0:27:26 - A language model built for the public good
0:32:28 - Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives
0:48:35 - ElevenLabs launches an AI music generator, which it claims is cleared for commercial use
0:57:12 - Illinois is the first state to ban AI therapists
0:59:55 - ChatGPT adds mental health guardrails after bot 'fell short in recognizing signs of delusion'
1:03:40 - OpenAI removes ChatGPT feature after private conversations leak to Google search
1:06:00 - Apple might be building its own AI ‘answer engine'
1:09:31 - Anthropic Unveils More Powerful AI Model Ahead of Rival GPT-5 Release
1:11:01 - Anthropic Revokes OpenAI's Access to Claude
1:13:23 - Hackers Hijacked Google's Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home
1:14:58 - Grok's ‘spicy' video setting instantly made me Taylor Swift nude deepfakes
Learn more about your ad choices. Visit megaphone.fm/adchoices
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Today on HR Heretics, Kelli and Nolan analyze the controversial Windsurf acquisition, prompted by Windsurf employee #2's explosive social media post about receiving only 1% equity payout despite Google's $2 billion deal, highlighting Silicon Valley's eroding compensation norms.
Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/
HR Heretics is a podcast from Turpentine.
Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI platform built for recruiting. Our suite of AI agents work across your hiring process to save time, boost decision quality, and elevate the candidate experience. Learn why team builders at 3,000+ cutting-edge companies like Brex, Deel, and Quora can't live without Metaview. It only takes minutes to get up and running. Check it out!
KEEP UP WITH NOLAN + KELLI ON LINKEDIN
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/
TIMESTAMPS:
(00:00) Intro
(00:13) Prem's Bombshell Tweet
(01:32) The DeepMind vs Cognition Choice
(02:16) Clarifying the Exploding Offer
(04:15) Kelli's Google Looker Experience
(05:00) Why Contract Protections Don't Matter
(06:07) Leadership Accountability
(07:16) Silicon Valley's Broken Unwritten Rules
(08:38) Sponsors: Planful | Metaview
(11:37) Culture vs Money in Acquisitions
(13:00) First Principles: The New Acquisition Reality
(14:56) Garry Tan's Tone-Deaf Response
(15:41) The Chaos of Modern Tech
(17:00) The Power of Social Media Transparency
(17:49) Wrap-Up
This is a public episode.
If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
This episode features Shlomi Fruchter and Jack Parker-Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored).
Imagine you could create a video game world just by describing it. That's what Genie 3 does. It's an AI "world model" that learns how the real world works by watching massive amounts of video. Unlike a normal video game engine (like Unreal or the one for Doom) that needs to be programmed manually, Genie generates a realistic, interactive, 3D world from a simple text prompt.
***SPONSOR MESSAGES***
Prolific: Quality data. From real people. For faster breakthroughs. https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
***
Here's a breakdown of what makes it so revolutionary:
From Text to a Virtual World: You can type "a drone flying by a beautiful lake" or "a ski slope," and Genie 3 creates that world for you in about three seconds. You can then navigate and interact with it in real time.
It's Consistent: The worlds it creates have a reliable memory. If you look away from an object and then look back, it will still be there, just as it was. The guests explain that this consistency isn't explicitly programmed in; it's a surprising, "emergent" capability of the powerful AI model.
A Huge Leap Forward: The previous version, Genie 2, was a major step, but it wasn't fast enough for real-time interaction and was much lower resolution. Genie 3 is 720p, interactive, and photorealistic, running smoothly for several minutes at a time.
The Killer App - Training Robots: Beyond entertainment, the team sees Genie 3 as a game-changer for training AI. Instead of training a self-driving car or a robot in the real world (which is slow and dangerous), you can create infinite simulations. You can even prompt rare events to happen, like a deer running across the road, to teach an AI how to handle unexpected situations safely.
The Future of Entertainment: This could lead to a "YouTube version 2" or a new form of VR, where users can create and explore endless, interconnected worlds together, like the experience machine from philosophy.
While the technology is still a research prototype and not yet available to the public, it represents a monumental step towards creating true artificial worlds from the ground up.
Jack Parker-Holder [Research Scientist at Google DeepMind in the Open-Endedness Team] https://jparkerholder.github.io/
Shlomi Fruchter [Research Director, Google DeepMind] https://shlomifruchter.github.io/
TOC:
[00:00:00] - Introduction: "The Most Mind-Blowing Technology I've Ever Seen"
[00:02:30] - The Evolution from Genie 1 to Genie 2
[00:04:30] - Enter Genie 3: Photorealistic, Interactive Worlds from Text
[00:07:00] - Promptable World Events & Training Self-Driving Cars
[00:14:21] - Guest Introductions: Shlomi Fruchter & Jack Parker-Holder
[00:15:08] - Core Concepts: What is a "World Model"?
[00:19:30] - The Challenge of Consistency in a Generated World
[00:21:15] - Context: The Neural Network Doom Simulation
[00:25:25] - How Do You Measure the Quality of a World Model?
[00:28:09] - The Vision: Using Genie to Train Advanced Robots
[00:32:21] - Open-Endedness: Human Skill and Prompting Creativity
[00:38:15] - The Future: Is This the Next YouTube or VR?
[00:42:18] - The Next Step: Multi-Agent Simulations
[00:52:51] - Limitations: Thinking, Computation, and the Sim-to-Real Gap
[00:58:07] - Conclusion & The Future of Game Engines
REFS:
World Models [David Ha, Jürgen Schmidhuber] https://arxiv.org/abs/1803.10122
POET https://arxiv.org/abs/1901.01753
The Fractured Entangled Representation Hypothesis [Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley] https://arxiv.org/pdf/2505.11581
TRANSCRIPT: https://app.rescript.info/public/share/Zk5tZXk6mb06yYOFh6nSja7Lg6_qZkgkuXQ-kl5AJqM
Google DeepMind has revealed Genie 3, its latest foundation world model that the AI lab says presents a crucial stepping stone on the path to artificial general intelligence, or human-like intelligence. Also, enterprise security company SonicWall is urging its customers to disable a core feature of its most recent line-up of firewall devices after security researchers reported an uptick in ransomware incidents targeting SonicWall customers. Learn more about your ad choices. Visit podcastchoices.com/adchoices
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
A daily chronicle of AI innovations for August 5th, 2025.
Hello AI Unraveled listeners, in today's AI Daily News,
Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. Fertility rates in the United States are currently near historic lows, largely because fewer women are having children in their 20s. As women delay starting families, many are opting for egg freezing, the process of retrieving and freezing unfertilized eggs, to preserve their fertility for the future. Does egg freezing provide women with a way to pause their biological clock? Correspondent Lesley Stahl interviews women who have decided to freeze their eggs and explores what the process entails physically, emotionally and financially. She also speaks with fertility specialists and an ethicist about success rates, equity issues and the increasing market potential of egg freezing. This is a double-length segment. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Is artificial intelligence about to cross a historic threshold? At the AI Action Summit, I attended an exceptional conversation between two major figures in the field: Demis Hassabis, co-founder and CEO of DeepMind, and James Manyika, Google's vice president in charge of research. Together, they shared their vision of the opportunities and risks surrounding the rise of AI, particularly on the road to artificial general intelligence (AGI).
Rebroadcast from 14/02/2025
In this captivating exchange organized by Google France, the two speakers discuss the current benefits of AI, notably in medical diagnosis in developing countries, and the promise of a universal digital assistant. They also address the prospect of intelligent systems capable of carrying out complex tasks, and the coming impact on the labor market. But this evolution also brings serious challenges: security, governance, potential abuses, ethics... Demis Hassabis insists on the need to put guardrails in place and to ensure that AI systems embody the right values. James Manyika calls for anticipating the effects on society now and investing in training.
Dimitri and Khalid speak with academic and Substack writer Vincent Lê about the current fevered dystopian landscape of AI, including: the Silicon Valley philosophy of "Rationalism", the Zizian cult, the qualitative difference between LLMs and self-training AIs like DeepMind's AlphaGo, AlphaGo mastering the ancient Chinese game Go, Scott Boorman's 1969 book "The Protracted Game: A Wei-ch'i Interpretation of Maoist Revolutionary Strategy", Capital as the first true AGI system, the Bolshevik Revolution as the greatest attempt to build a friendly alternative AGI, and more... part one of two. Vincent's Substack: https://vincentl3.substack.com
Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released. Do AI engineers need to emulate processes and features currently found only in living organisms, like the way brains are inextricably integrated with bodies? Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story, or a key part, or the key part? Jennifer Prendki believes that if we continue to scale AI, it will get us more of the same of what we have today, and that we should look to biology, life, and possibly consciousness to enhance AI. Jennifer is a former particle physicist turned entrepreneur and AI expert, focusing on curating the right kinds and forms of data to train AI; in that vein she led those efforts at DeepMind on the foundation models ubiquitous in our lives now. I was curious why someone with that background would come to the conclusion that AI needs inspiration from life, biology, and consciousness to move forward gracefully, and that it would be useful to better understand those processes in ourselves before trying to build what some people call AGI, whatever that is. Her perspective is a rarity among her cohorts, which we also discuss. And get this: she's interested in these topics because she cares about what happens to the planet and to us as a species. Perhaps also a rarity among those charging ahead to dominate profits and win the race. Jennifer's website: Quantum of Data.
The blog posts we discuss: "The Myth of Emergence"; "Embodiment & Sentience: Why the Body Still Matters"; "The Architecture of Synthetic Consciousness"; "On Time and Consciousness"; "Superalignment and the Question of AI Personhood".

0:00 - Intro
3:25 - Jennifer's background
13:10 - Consciousness
16:38 - Life and consciousness
23:16 - Superalignment
40:11 - Quantum
1:04:45 - Wetware and biological mimicry
1:15:03 - Neural interfaces
1:16:48 - AI ethics
1:2:35 - AI models are not models
1:27:13 - What scaling will get us
1:39:53 - Current roadblocks
1:43:19 - Philosophy
In this episode of Hashtag Trending, host Jim Love discusses various hot topics in tech. The new US administration's AI plan faces widespread criticism for its ambiguous and potentially punitive nature. Researchers in quantum computing at the University of Sydney achieve a significant milestone in error reduction, bringing practical use closer. Google's DeepMind develops robots capable of continuously learning through a game of ping pong. Meta faces a unique class action lawsuit over allegedly using pirated adult content to train AI models. Lastly, a notable Starlink outage results in unexpectedly faster internet speeds for users. Tune in for more in-depth insights and analyses. 00:00 Introduction and Overview 00:41 Controversial US AI Action Plan 02:51 Quantum Computing Breakthrough 04:46 Google's AI-Powered Ping Pong 06:11 Meta's Legal Troubles with AI Training Data 07:59 Starlink's Unexpected Speed Boost 09:12 Conclusion and Sign-Off
Welcome back to another episode of the EUVC Podcast, your trusted inside track on the people, deals, and dynamics shaping European venture. This week marks a major milestone — Episode 50! To celebrate, Dan Bowyer, Mads Jensen of SuperSeed, Lomax from Outsized Ventures, and Andrew J. Scott return to unpack the headlines and trends shaping the European tech landscape. From the UK government's OpenAI partnership and what it means, to the missed boat on stablecoins, to AI outperforming the brightest minds in math — this episode cuts deep into the future of tech, sovereignty, and competitiveness in Europe. Whether you're a founder navigating policy shifts, an investor eyeing infrastructure plays, or just an AI-curious policy wonk — this one's for you.

Here's what's covered:

00:00 | Celebrating Episode 50
The gang reflects on hitting a podcasting milestone and shares quick updates from Denmark, Paris, and a beachside founder retreat.

03:30 | OpenAI x UK Government: A Real Deal?
The UK's MOU with OpenAI is meant to boost public sector productivity—but is it too flimsy to matter? The hosts debate if this partnership is toothless signaling or meaningful progress.

06:00 | Can AI Actually Transform Public Services?
From "Humphrey" the chatbot to NHS waitlists, the panel weighs in on the real-world use cases, and how opt-in AI diagnostics could solve the NHS backlog.

09:30 | The Bigger Picture: AI Sovereignty and Strategy
With the UK relying on US players (OpenAI, Anthropic, Nvidia), are we compromising our digital sovereignty? Andrew drops the big question: Is this the modern equivalent of exporting raw strategic resources?

14:00 | US vs UK AI Plans: Build, Baby, Build vs. Think, Baby, Think
The team compares the UK's thoughtful "consultancy-style" AI strategy with the US's aggressive, deregulatory action plan—complete with eagles and executive orders.

19:00 | Policy Recommendations from the Pod
From national compute backbones and Buy-UK mandates to AI visa fast-tracks and sovereign LLMs — the panel proposes big ideas Europe should act on today.

25:00 | Stablecoins: UK's Missed Opportunity
While Japan, Singapore, and the US regulate stablecoins, the UK is just starting consultations. Why? And what's at stake?

30:00 | Dollar Dominance Reinvented
Mads explains how stablecoins are reinforcing US economic control — and how UK hesitation risks long-term relevance in fintech.

34:00 | Ideas for UK Leadership in Stablecoins
Could interest-bearing stablecoins become London's new edge? Could we reclaim fintech innovation by embracing DeFi rails?

38:00 | AI Wins Gold at the Maths Olympiad
Google DeepMind and OpenAI hit gold-level scores at the IMO. The gang discusses the leap in AI's creative reasoning and what it means for R&D, drug discovery, and Europe's scientific leadership.

43:00 | Should Europe Build Its Own Sovereign Research Hub?
From CERN-for-AI to training sovereign models, the crew asks whether public sector moonshots are the right way to compete.

48:00 | Deal of the Week: Eurazeo's €650M Fund for AI Scaleups
In a capital-constrained landscape, Eurazeo closes a rare growth fund to back Europe's AI champions.

50:00 | Wildcard: AI vs. Raccoons
Andrew shares a niche but hilarious use case for computer vision AI: keeping raccoons out of houses. No joke.
Join hosts Alex Sarlin and Claire Zau, a Partner and AI Lead at GSV Ventures, as they explore the latest developments in education technology, from AI agents to teacher co-pilots, talent wars, and shifts in global AI strategies.

✨ Episode Highlights
[00:00:00] AI teacher co-pilots evolve into agentic workflows.
[00:02:15] OpenAI launches ChatGPT Agent for autonomous tasks.
[00:04:24] Meta, Google, and OpenAI escalate AI talent wars.
[00:07:38] Privacy guardrails emerge for AI agent actions.
[00:10:20] ChatGPT pilots "Study Together" learning mode.
[00:14:40] Teens use AI as companions, sparking debate.
[00:19:58] AI multiplies both positive and negative behaviors.
[00:29:11] Windsurf acquisition saga shows coding disruption.
[00:37:18] Teacher AI tools gain value through workflow data.
[00:42:48] DeepMind's rise positions Demis Hassabis as key leader.
[00:45:32] Google offers free Gemini AI plan to Indian students.
[00:49:39] Meta builds massive AI data centers for digital labor.

Plus, special guests:
[00:52:42] Matthew Gasda, a writer and director, on how educators can rethink writing and grading in the AI era.
[01:13:30] Marc Graham, founder of Spark Education AI, on using AI to personalize reading and engage reluctant readers.
What does the White House have to say about AI? In episode 65 of Mixture of Experts, host Tim Hwang is joined by Kate Soule, Gabe Goodhart and first-time guest, Mihai Criveti. First, Google DeepMind shared that Gemini Deep Think won Gold at IMO. Next, who is using ChatGPT agents? We get our experts' thoughts. Then, Mihai takes us through MCP Gateway and what this means for next-gen AI systems. Finally, special guest, Ryan Hagemann, joins us to analyze the White House's new AI Action Plan, released this week. What does this mean for AI policy? Tune in to Mixture of Experts to find out! 00:00 – Intro 01:16 - DeepMind at IMO 16:27 - ChatGPT agents 25:43 - MCP Gateway 35:45 - AI Action Plan The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
In this episode, Colin and Samir sit down with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, for a conversation about the future of artificial intelligence and what it means for creators, culture, and the future of the internet. Mustafa shares how his journey from running a juice stand in Camden Market to building one of the world's leading AI companies has shaped his view of technology and society. We dive into the emotional and creative potential of AI companions, the importance of trust and brand in the age of generative tools, and why the digital spaces we work in need more texture, personality, and "digital patina." Whether you're a creator, founder, or just trying to understand where the internet is going, this conversation will spark new ideas—and probably change the way you think about the future.
In this special live recording from our Lexington AI Meetup, we sit down with Steve Crossan, a founding member of Google DeepMind's AlphaFold team and former Google product leader. Steve helped launch groundbreaking AI research as part of the team that built AlphaFold, the model that cracked one of biology's grand challenges.

AlphaFold can predict a protein's 3D structure using only its amino acid sequence, a task that once took scientists months or years and is now completed in minutes. With the release of AlphaFold 3, the model now maps not just proteins, but how they interact with DNA, RNA, drugs, and antibodies, a huge leap for drug discovery and synthetic biology.

Steve breaks down the origin story of AlphaFold, the future of AI-powered science, and what's next for healthcare, drug development, and beyond. A special thank you to Brent Seales and Randall Stevens for helping us coordinate Steve's talk during his visit in Lexington!

If you'd like to stay up to date about upcoming Middle Tech events, subscribe to our newsletter at middletech.beehiiv.com.
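To make the "sequence in, structure out" framing concrete: AlphaFold's only required input is a protein's amino acid sequence in one-letter code. The sketch below is not DeepMind's API; the validation helper is our own illustration of what "only its amino acid sequence" means, checking a string against the 20 standard residue codes before it could be handed to any structure predictor.

```python
# Toy sketch (not DeepMind's API): AlphaFold's only required input is a
# protein's amino acid sequence in one-letter code. This helper checks a
# sequence against the 20 standard residue codes before it would be sent
# to any structure predictor.
STANDARD_RESIDUES = set("ACDEFGHIKLMNPQRSTVWY")

def is_valid_sequence(seq: str) -> bool:
    """True if seq is non-empty and uses only standard amino acid codes."""
    seq = seq.strip().upper()
    return len(seq) > 0 and set(seq) <= STANDARD_RESIDUES

# Human insulin B chain, a real 30-residue sequence:
insulin_b = "FVNQHLCGSHLVEALYLVCGERGFFYTPKT"
print(is_valid_sequence(insulin_b))  # True
print(is_valid_sequence("FVNQX9"))   # False: 'X' and '9' are not standard codes
```

A string like this is literally all AlphaFold needs; the months-to-minutes leap Steve describes is everything that happens after this input.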
Mustafa Suleyman is a key figure in the artificial intelligence world. He's Microsoft AI CEO, with roots in Google's DeepMind and Inflection AI. Suleyman recently joined WSJ columnists Christopher Mims and Tim Higgins on an episode of their Bold Names podcast. They discuss why AI assistants are central to Microsoft's AI future, the company's relationship with OpenAI, and what Suleyman really thinks about "artificial general intelligence." Tech News Briefing brings you an encore of that episode. Listen and subscribe to Bold Names.
Welcome back to another action-packed episode of Tank Talks! Join host Matt Cohen with John Ruffolo as they break down the high-stakes drama shaking the AI and tech world. First up: the shocking collapse of OpenAI's $3 billion deal to acquire Windsurf, derailed by Microsoft's IP grip, and Google DeepMind's lightning-fast $2.4 billion countermove to snag top talent. Was this a regulatory dodge or a ruthless talent grab? The plot thickens as Cognition swoops in to rescue Windsurf's abandoned employees, sparking fiery debates about ethics in tech acquisitions. Meanwhile, Meta's throwing half-billion-dollar offers and unlimited GPU access at AI researchers. Will this arms race kill open-source AI? From Google's $3 billion hydropower deal to private equity's risky play for Grant Thornton, no stone is left unturned. Plus, is the red-hot IPO market ready for crypto's comeback? Strap in for a no-holds-barred dive into the deals, power struggles, and Silicon Valley scheming you need to know about!

OpenAI's $3 Billion Deal Collapse: A Tech Industry Shock (00:45)
It all started with a major deal unraveling: OpenAI's attempt to acquire Windsurf, a competitor to Cursor, fell apart due to a contractual conflict with Microsoft. Matt and John break down what went wrong, how this impacts the AI talent war, and the broader implications for future tech acquisitions.

Google DeepMind's $2.4 Billion Deal: A New Era of AI Acquisition (02:05)
Google swoops in to capitalize on the situation with a $2.4 billion licensing deal for Windsurf's key staff and technology. Matt and John explore how this move positions Google and whether it signals a new wave of AI-powered business acquisitions.

The Ethics of Acquihires and Minority Shareholder Issues (05:10)
What happens when top employees leave with huge payouts, while others are left behind in the dust?
John and Matt discuss the ethical and legal complexities of acquihires and the tension between founders, employees, and investors when money and control are on the line.

Cognition's Quick Move to Acquire Windsurf (07:00)
In a dramatic twist, AI company Cognition steps in to acquire Windsurf and its employees, turning the situation around. Matt and John analyze the speed and strategy behind this acquisition and what it means for competition in the AI coding space.

Mark Zuckerberg's AI Talent Strategy: Unlimited GPUs and $500M Deals (09:00)
Zuckerberg's bold move to attract top AI talent with unlimited GPU access and eye-popping compensation packages is making waves. But is it desperation or a stroke of genius? Tune in as Matt and John debate the future of AI talent wars and Meta's place in the race for superintelligence.

The Power Struggles Behind AI and Crypto Investments (11:05)
It's not just about technology; it's about energy, too. Matt and John discuss the power struggles behind data centers, microgrids, and massive AI and crypto energy consumption, including the huge investments made by Meta, Google, and Oracle to secure their futures.

Grant Thornton's Global Franchise Issues: When Private Equity Meets AI (12:20)
Private equity's increasing role in professional services firms like Grant Thornton is causing some tension. Matt and John explore how AI and cross-border partnerships are shaking up the accounting world, leading to serious questions about the future of global firms.

Bitcoin Soars and IPOs Heat Up: The Crypto Revolution (15:30)
Bitcoin is soaring and the IPO market is heating up with crypto companies eager to go public.
John and Matt discuss the latest developments in the blockchain world, and whether there's room for Canadian companies to make waves on the IPO stage.

Connect with John Ruffolo on LinkedIn: https://ca.linkedin.com/in/joruffolo
Connect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1
Visit the Ripple Ventures website: https://www.rippleventures.com/

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com
What if your company had a digital brain that never forgot, always knew the answer, and could instantly tap the knowledge of your best engineers, even after they left? Superintelligence can feel like a hand-wavy pipe dream, yet, as Misha Laskin argues, it becomes a tractable engineering problem once you scope it to the enterprise level. Former DeepMind researcher Laskin is betting on an oracle-like AI that grasps every repo, Jira ticket, and hallway aside as deeply as your principal engineer, and he's building it at Reflection AI.

In this wide-ranging conversation, Misha explains why coding is the fastest on-ramp to superintelligence, how "organizational" beats "general" when real work is on the line, and why today's retrieval-augmented generation (RAG) feels like "exploring a jungle with a flashlight." He walks us through Asimov, Reflection's newly unveiled code-research agent that fuses long-context search, team-wide memory, and multi-agent planning so developers spend less time spelunking for context and more time shipping.

We also rewind his unlikely journey, from physics prodigy in a Manhattan Project desert town, to Berkeley's AI crucible, to leading RLHF for Google Gemini, before he left big-lab comfort to chase a sharper vision of enterprise superintelligence.
Along the way: the four breakthroughs that unlocked modern AI, why capital efficiency still matters in the GPU arms race, and how small teams can lure top talent away from nine-figure offers. If you're curious about the next phase of AI agents, the future of developer tooling, or the gritty realities of scaling a frontier-level startup, this episode is your blueprint.

Reflection AI
Website - https://reflection.ai
LinkedIn - https://www.linkedin.com/company/reflectionai

Misha Laskin
LinkedIn - https://www.linkedin.com/in/mishalaskin
X/Twitter - https://x.com/mishalaskin

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) Intro (01:42) Reflection AI: Company Origins and Mission (04:14) Making Superintelligence Concrete (06:04) Superintelligence vs. AGI: Why the Goalposts Moved (07:55) Organizational Superintelligence as an Oracle (12:05) Coding as the Shortcut: Hands, Legs & Brain for AI (16:00) Building the Context Engine (20:55) Capturing Tribal Knowledge in Organizations (26:31) Introducing Asimov: A Deep Code Research Agent (28:44) Team-Wide Memory: Preserving Institutional Knowledge (33:07) Multi-Agent Design for Deep Code Understanding (34:48) Data Retrieval and Integration in Asimov (38:13) Enterprise-Ready: VPC and On-Prem Deployments (39:41) Reinforcement Learning in Asimov's Development (41:04) Misha's Journey: From Physics to AI (42:06) Growing Up in a Science-Driven Desert Town (53:03) Building General Agents at DeepMind (56:57) Founding Reflection AI After DeepMind (58:54) Product-Driven Superintelligence: Why It Matters (01:02:22) The State of Autonomous Coding Agents (01:04:26) What's Next for Reflection AI
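Misha's "jungle with a flashlight" line is about classic retrieval-augmented generation: a query pulls back only the top-k snippets that score highest against it, so the model downstream only ever sees a narrow beam of the codebase. Here is a toy sketch of that retrieval step, using keyword overlap in place of the learned embeddings real systems use (illustrative only, not Reflection's implementation):

```python
# Toy retrieval-augmented generation (RAG) retrieval step: rank documents
# by keyword overlap with the query and return only the top k. Real systems
# use learned embeddings; the point here is that the downstream model only
# "sees" this narrow top-k beam of the corpus -- the flashlight in the jungle.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "the billing service retries failed payments nightly",
    "auth tokens expire after one hour",
    "payments are reconciled by the billing cron job",
]
print(retrieve("how does billing handle payments", docs))
```

Everything outside the top-k list is invisible to the model, which is why Asimov's bet on long-context search and team-wide memory is pitched as an alternative to snippet-by-snippet retrieval.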
Grok 4 from xAI just aced “Humanity's Last Exam” benchmarks while Grok 3 had a catastrophic public meltdown. What does this mean for the future of AI and Elon Musk's credibility? And, in other AI news, OpenAI's GPT-5 is rumored to land next week along with a new open-source reasoning model, Google's DeepMind launches AI-designed drugs into human trials, and Perplexity's new AI browser Comet sparks OpenAI's plan to crush Chrome. PLUS YouTube cracks down on AI-generated spam while updating image-to-video in VEO 3, Moon Valley releases an “ethical” AI video platform, and why you should probably stop kicking robots. AI IS GETTING SMARTER...BUT WE STILL CONTROL THE TREATS. Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Grok4: The Smartest Model Yet? 
https://x.com/xai/status/1943158495588815072
Elon Says Grok-4 is better than PhD Level… https://x.com/teslaownersSV/status/1943168634672566294
Benchmarks https://x.com/ArtificialAnlys/status/1943166841150644622 https://x.com/arcprize/status/1943168950763950555
McKay Wrigley Grok 4 Heavy Example https://x.com/mckaywrigley/status/1943385794414334032
Grok Goes Bad: The Unhinged Behavior https://www.nytimes.com/2025/07/08/technology/grok-antisemitism-ai-x.html https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
X CEO Linda Yaccarino Quits https://www.cnbc.com/2025/07/09/linda-yaccarino-x-elon-musk.html
Elon still trying to fix answers https://x.com/elonmusk/status/1943240153587421589
OpenAI Poaches Tesla/xAI People https://www.wired.com/story/openai-new-hires-scaling/
Apple's Top AI Exec Leaves For Meta https://x.com/markgurman/status/1942341725499863272
OpenAI's open-source model coming as soon as next week and compares to o3-mini https://www.theverge.com/notepad-microsoft-newsletter/702848/openai-open-language-model-o3-mini-notepad
Perplexity's Comet Browser Launches https://comet.perplexity.ai/
OpenAI Fires Back With Its Browser News https://x.com/AndrewCurran_/status/1943008960803680730
YouTube *Might* Change Their Policies to Limit Faceless AI Videos (and mass produced content) https://techcrunch.com/2025/07/09/youtube-prepares-crackdown-on-mass-produced-and-repetitive-videos-as-concern-over-ai-slop-grows/
Google VEO 3 Image-to-Vid launched https://x.com/Uncanny_Harry/status/1942686253817974984 https://x.com/CaptainHaHaa/status/1942907271841030183 https://x.com/TheoMediaAI/status/1942564887114166493
My test + ask for sound sampling from the team: https://x.com/AIForHumansShow/status/1942597607312040348
Moonvalley Launches AI Video Platform https://www.moonvalley.com/
Google DeepMind's Isomorphic Labs Starts Human Trials on AI-Generated Drugs
https://www.aol.com/finance/google-deepmind-grand-ambitions-cure-130000934.html?utm_source=perplexity&guccounter=1
Noetix N2 Robot Endures Abuse From Its Developer https://x.com/TheHumanoidHub/status/1941935665173963085 https://noetixrobotics.com/products-138.html
Kavan The Kid (the AI Batman video guy) CRUSHED His New Original Trailer https://x.com/Kavanthekid/status/1940452444850589999
Reachy The Robot from Hugging Face https://x.com/Thom_Wolf/status/1942887160983466096
Autonomous Robot Excavator Building a Wall https://x.com/lukas_m_ziegler/status/1941815414683521488
Pushmeet Kohli leads AI for Science at DeepMind, where his team has created AlphaEvolve, an AI system that discovers entirely new algorithms and proves mathematical results that have eluded researchers for decades. From improving 50-year-old matrix multiplication algorithms to generating interpretable code for complex problems like data center scheduling, AlphaEvolve represents a new paradigm where LLMs coupled with evolutionary search can outperform human experts. Pushmeet explains the technical architecture behind these breakthroughs and shares insights from collaborations with mathematicians like Terence Tao, while discussing how AI is accelerating scientific discovery across domains from chip design to materials science. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
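At its core, the paradigm Pushmeet describes couples an LLM that proposes candidate program edits with an evolutionary loop that scores every candidate on an automatic evaluator and keeps the best. The sketch below is a deliberately tiny stand-in for that loop: random Gaussian mutation replaces the LLM's code proposals, and the "program" is just a coefficient vector (illustrative only, not AlphaEvolve's architecture):

```python
import random

# Deliberately tiny stand-in for an AlphaEvolve-style loop: an evolutionary
# search that mutates a candidate, scores every child with an automatic
# evaluator, and keeps the best survivor. Random Gaussian mutation replaces
# the LLM's code proposals, and the "program" is just a coefficient vector.
def evolve(score, candidate, generations=200, pop=20, seed=0):
    rng = random.Random(seed)
    best = candidate
    for _ in range(generations):
        children = [
            [c + rng.gauss(0, 0.1) for c in best]  # mutate (LLM stand-in)
            for _ in range(pop)
        ]
        best = max(children + [best], key=score)   # evaluator-driven selection
    return best

# Evaluator: negative squared distance to a target "ideal program".
target = [1.0, -2.0]
score = lambda v: -sum((a - b) ** 2 for a, b in zip(v, target))
best = evolve(score, [0.0, 0.0])
print(best)  # converges close to [1.0, -2.0]
```

The design point is that the evaluator, not a human, is the selection pressure: as long as a candidate can be scored automatically (runtime of a matrix multiplication, cost of a schedule), the loop can climb past human-designed baselines.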
Solon Angel is the founder of MindBridge and now Remitian, and he's been at the forefront of applying AI to deeply unsexy but powerful domains like accounting and tax compliance. In this episode, he shares the origin story of MindBridge, how a DeepMind demo changed his life, and what it's like to build a modern startup where AI plays the role of a product manager, podcast producer, and even financial advisor. Solon also demoed his newest AI agent that proactively manages tax remittances before late fees hit. If you're wondering what the future of AI-powered businesses looks like, this is a masterclass.

Timestamps:
00:00 — The DeepMind demo that inspired Solon
01:00 — Solon's background and the early days of MindBridge
03:00 — The "dumb rule" state of AI in financial auditing
04:30 — Selling AI to skeptical accountants in 2015
06:00 — The staggering cost of late tax fees ($60B/year!)
08:00 — Remitian: an AI agent that pays your taxes for you
10:00 — Why Fellow is a core part of how Remitian runs
11:30 — How AI helps eliminate the need for a product manager
13:00 — Rewriting 3 years of code in 3 months with AI
16:00 — The shift in what matters: creativity over code
18:00 — Calorie-tracking app Cal AI and teen founders
19:00 — Solon's AI-powered investment tool (+21% YTD)
20:00 — Live demo: AI agent managing tax payments
23:00 — Future vision: AI offering instant tax loans
25:00 — How Remitian uses Notebook LM for internal podcasts
27:00 — AI updates for board members in 10-minute clips
28:00 — Notion AI's "research mode" vs. "ask" mode
30:00 — Predicting the rise of startups for content auto-archiving
33:00 — Solon's final thoughts: beating billion-dollar firms with AI

Tools & Technologies Mentioned:
Fellow – Used for meeting AI transcripts, pre-reads, and knowledge sharing
Notebook LM (Google) – Turns transcripts into internal podcasts
Notion AI – Used for deep research, summarizing objections, and discovering product insights
Slack – Centralized communication, connected with other AI tools
Cursor – AI coding tool used to rewrite years of code in months
Soft Type 2 – Mentioned in relation to efficient AI-based prototyping
Cal AI – Food photo calorie tracker built by a 17-year-old founder
ChatGPT Vision – Used by Solon to interpret emotions via facial expressions
Custom AI Trader – Built by Solon for sentiment-based trading, outperformed the market
Remitian's AI Agent – Calls users, checks funds, splits tax payments, and offers loans

Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
Plus: Are We About To Lose Every Job To Robots?

Like this? Get AIDAILY, delivered to your inbox, 3x a week. Subscribe to our newsletter at https://aidaily.us

AI Is Radicalizing Both Sides—Welcome to the New Culture War
AI isn't just changing tools—it's fueling a full-blown ideological war. Some see AI as the ultimate evolution of human progress, while skeptics dismiss it as a bubble or worse. The tech's strengths and flaws—automation, hallucinations, social impact—are driving both hype and backlash. This split isn't about tech, it's about belief.

Would You Replace Your CEO with an AI Avatar? Tech CEOs Are Testing the Waters
Some tech bosses are literally sending AI versions of themselves to earnings calls: Klarna's Sebastian Siemiatkowski and Zoom's Eric Yuan let digital avatars do their talking for them. Meanwhile, Klarna's CEO even admitted AI could eventually replace his own job—though real-world chaos means humans aren't fully out… yet.

Winning the AI Race Means More Than Just Tech—The U.S. Needs Strategy
The U.S.–China AI competition isn't just about building smarter bots—it's a full-on geopolitical showdown. It's a race on three fronts: developing AGI, embedding AI across societies, and securing chips, data lanes, and regulation. To stay on top, the U.S. needs a holistic plan that blends innovation, smart policy, and defense—not just private-sector hype.

Left Tech for Welding—Here's Why It Was the Best Move Ever
Tabby Toney got laid off from her software gig in May and bailed on tech because AI was making everything feel shallow. Instead, she's welding again—tapping into creativity, problem-solving, and actual hands-on work. No more burnout, no more prep for grueling interviews—just real craft.

We're About to Lose Almost Every Job to Robots—Here's the Deal
Futurist Adam Dorr says in the next ~20 years, robots and AI will snatch nearly all jobs—cooking, coding, caring—faster and cheaper than us. Some human roles may hang on, but not nearly enough.
Society's gotta rethink how we share value, income, and purpose before chaos hits.

AI That Promises to 'Solve All Diseases' Is Heading Into Human Trials
A stealthy Google-owned lab, Isomorphic Labs (spun out from DeepMind), is now testing AI-designed cancer drugs in humans. Backed by AlphaFold 3, it designs molecules in silico, aiming to slash the 10–15 year, billion-dollar drug timeline. But with no clue how the AI makes decisions, questions around safety, transparency, pricing, and monopoly loom large.
I'm excited to announce the fifth episode of our new series, What's New in Science, co-hosted by Sabine Hossenfelder. Once again, Sabine and I each brought a few recent science stories to the table, and we took turns introducing them before diving into thoughtful discussions. It's a format that continues to spark engaging exchanges, and based on the feedback we've received, it's resonating well with listeners.

In this month's episode Sabine first explored the possibility that huge, accessible terrestrial reservoirs of hydrogen may exist that could provide the basis for a viable hydrogen fuel economy. Then we turned to the results from the wonderful new Vera C. Rubin Observatory in Chile, and what that telescope could do for our evolving picture of the cosmos. After that Sabine introduced a discussion of a scientific paper I wrote with colleagues on the implications of mathematical incompleteness theorems for the possible existence of a physical Theory of Everything. Then on to the newly released results from the muon g-2 experiment at Fermilab, which, after almost two decades of effort, seems to have demonstrated that predictions from the Standard Model of Particle Physics, alas, continue to agree with experiments, showing no signs of new physics. After that, we explored a new claim by DeepMind about the abilities of AI systems to design and test new coding algorithms, which might be used to train future systems. Besides the science-fiction-sounding nature of this, it could also help reduce the amount of energy needed to build and train LLMs. Finally, returning to my own interest in new results related to the cosmic origin of life, we discussed a new result showing why polycyclic hydrocarbons, which one might expect would be destroyed by radiation in space, seem to survive. This could be important for understanding how organic seeds for life managed to survive long enough to arrive on the early Earth.
As always, an ad-free video version of this podcast is also available to paid Critical Mass subscribers. Your subscriptions support the non-profit Origins Project Foundation, which produces the podcast. The audio version is available free on the Critical Mass site and on all podcast sites, and the video version will also be available on the Origins Project YouTube. Get full access to Critical Mass at lawrencekrauss.substack.com/subscribe
In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called "An Approach to Technical AGI Safety and Security". It covers the assumptions made by the approach, as well as the types of mitigations it outlines. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html Topics we discuss, and timestamps: 0:00:37 DeepMind's Approach to Technical AGI Safety and Security 0:04:29 Current paradigm continuation 0:19:13 No human ceiling 0:21:22 Uncertain timelines 0:23:36 Approximate continuity and the potential for accelerating capability improvement 0:34:29 Misuse and misalignment 0:39:34 Societal readiness 0:43:58 Misuse mitigations 0:52:57 Misalignment mitigations 1:05:20 Samuel's thinking about technical AGI safety 1:14:02 Following Samuel's work Samuel on Twitter/X: x.com/samuelalbanie Research we discuss: An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849 Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462 The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/ Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499 Episode art by Hamish Doodles: hamishdoodles.com
Our 214th episode with a summary and discussion of last week's big AI news! Recorded on 06/27/2025 Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: Meta hires key engineers from OpenAI, and Thinking Machines Lab secures a $2 billion seed round at a $10 billion valuation. DeepMind introduces AlphaGenome, significantly advancing genomic research with a model comparable to AlphaFold but focused on gene functions. Taiwan imposes technology export controls on Huawei and SMIC, while Getty drops key copyright claims against Stability AI in a groundbreaking legal case. A new research paper examines cognitive debt in AI tasks, using EEG to assess cognitive load and recall in essay writing with LLMs. Timestamps + Links: (00:00:10) Intro / Banter (00:01:22) News Preview (00:02:15) Response to listener comments Tools & Apps (00:06:18) Google is bringing Gemini CLI to developers' terminals (00:12:09) Anthropic now lets you make apps right from its Claude AI chatbot Applications & Business (00:15:54) Sam Altman takes his ‘io' trademark battle public (00:21:35) Huawei Matebook Contains Kirin X90, using SMIC 7nm (N+2) Technology (00:26:05) AMD deploys its first Ultra Ethernet ready network card — Pensando Pollara provides up to 400 Gbps performance (00:31:21) Amazon joins the big nuclear party, buying 1.92 GW for AWS (00:33:20) Nvidia goes nuclear — company joins Bill Gates in backing TerraPower, a company building nuclear reactors for powering data centers (00:36:18) Mira Murati's Thinking Machines Lab closes on $2B at $10B valuation (00:41:02) Meta hires key OpenAI researcher to work on AI reasoning models Research & Advancements (00:49:46) Google's new AI will help researchers understand how our genes work (00:55:13) Direct 
Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks (01:01:54) Farseer: A Refined Scaling Law in Large Language Models (01:06:28) LLM-First Search: Self-Guided Exploration of the Solution Space Policy & Safety (01:11:20) Unsupervised Elicitation of Language Models (01:16:04) Taiwan Imposes Technology Export Controls on Huawei, SMIC (01:18:22) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task Synthetic Media & Art (01:23:41) Judge Rejects Authors' Claim That Meta AI Training Violated Copyrights (01:29:46) Getty drops key copyright claims against Stability AI, but UK lawsuit continues
Few people developing artificial intelligence have as much experience in the field as Microsoft AI CEO Mustafa Suleyman. He co-founded DeepMind, helped Google develop its large language models and designed AI chatbots with personality at his former startup, Inflection AI. Now, he's tasked with leading Microsoft's efforts on its consumer AI products. On the latest episode of the Bold Names podcast, Suleyman speaks to WSJ's Christopher Mims and Tim Higgins about why AI assistants are central to his plans for Microsoft's AI future. Plus, they discuss the company's relationship with OpenAI, and what Suleyman really thinks about “artificial general intelligence.” Check Out Past Episodes: Booz Allen CEO on Silicon Valley's Turn to Defense Tech: ‘We Need Everybody.' Venture Capitalist Sarah Guo's Surprising Bet on Unsexy AI Reid Hoffman Says AI Isn't an ‘Arms Race,' but America Needs to Win Salesforce CEO Marc Benioff and the AI ‘Fantasy Land' Let us know what you think of the show. Email us at BoldNames@wsj.com Sign up for the WSJ's free Technology newsletter. Read Christopher Mims's Keywords column . Read Tim Higgins's column. Learn more about your ad choices. Visit megaphone.fm/adchoices
OpenAI and Microsoft wrangle over the AGI clause. Meta lures ex-OpenAI researchers with mega bonuses. Harvey raises $300M for AI lawyers. China's AI offensive stalls due to chip embargoes. DeepMind predicts gene functions. Scale AI data sat exposed on the open web. ChatGPT and Perplexity conquer WhatsApp. The US puts the DMA at risk over auto tariffs. Tesla loses market share in Europe. Google Offerwall is meant to console publishers. RFK Jr. cuts vaccine funding. ICE scans faces via app. Salesforce reports 30% AI productivity. The Trump Phone comes from China after all. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) OpenAI ↔ Microsoft – AGI clause (00:04:00) Meta hires ex-OpenAI/DeepMind researchers (00:11:50) Harvey – $300M round for legal AI (00:21:20) China's AI offensive & chip embargo (00:25:00) DeepMind AlphaGenome – gene-function prediction (00:27:25) Scale AI leak: exposed customer data (00:32:00) ChatGPT & Perplexity conquer WhatsApp (00:36:40) DMA at risk – EU/US auto deal (00:39:50) Tesla sales decline in Europe (00:42:00) Reddit "Human Verification" (00:44:00) Google Offerwall against AI traffic loss (00:45:30) Schmuddelecke Shownotes Keynote Deck - Coatue OpenAI–Microsoft conflict: AI's intelligence is decisive – wsj.com Meta hires OpenAI researchers for AI models – techcrunch.com Meta is winning the talent contest with OpenAI – theverge.com Harvey raises $300 million at a $5 billion valuation for legal AI – fortune.com China close to over 100 DeepSeeks, says former top official – bloomberg.com DeepSeek's progress slowed by US export controls – theinformation.com Google gene tool – technologyreview.com Scale AI: sensitive customer data exposed in public Google Docs – africa.businessinsider.com One of the country's best hackers is an AI bot – bloomberg.com Meta adds AI-powered summaries to 
WhatsApp – techcrunch.com Meta in the AI race: WhatsApp as a chatbot battlefield – Business Insider Meta plans acquisition of AI startup PlayAI – bloomberg.com Suspension of the DMA? - Fears of an EU-US horse trade – share.google Tesla's European sales fall for the fifth month in a row – on.ft.com Reddit promises to stay human – on.ft.com Crypto holdings could make mortgages easier – businessinsider.com As AI reduces search traffic, Google launches Offerwall to boost revenue – techcrunch.com Robert Kennedy halts US funding for global vaccine alliance – ft.com ICE app – 404media.co Salesforce CEO: 30% of internal work done by AI – bloomberg.com Trump Mobile: new phones 'made in America' – eu.usatoday.com
What does it take to build AI-powered products that scale? In this episode, we are joined by Jonathan Evens, Product Lead at Google DeepMind, to explore the evolving role of product management in the AI era. Jonathan draws from experience across startups and Big Tech, ranging from smart grid systems to LLM-powered search, to reveal how high-impact AI products come to life. He breaks down DeepMind's journey launching “AI Overviews” in Search, from beta testing in May 2023 to worldwide rollout just six months later. Jonathan also shares frameworks for balancing problem-led versus technology-led thinking, future-proofing AI roadmaps, and making intelligent experiences (not just features) the north star. He's here to help product teams demystify LLMs and launch bold AI functionality with nimble development cycles. Tune in to gain practical advice for integrating AI thoughtfully, iterating quickly, and delivering real value.
Our 212th episode with a summary and discussion of last week's big AI news! Recorded on 06/33/2025 Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: OpenAI introduces o3 Pro for ChatGPT, highlighting significant improvements in performance and cost-efficiency. Anthropic sees an influx of talent from OpenAI and DeepMind, with significantly higher retention rates and competitive advantages in AI capabilities. New research indicates that reinforcing negative responses in LLMs significantly improves performance across all metrics, highlighting novel approaches in reinforcement learning. A security flaw in Microsoft Copilot demonstrates the growing risk of AI agents being hacked, emphasizing the need for robust protection against zero-click attacks. Timestamps + Links: (00:00:11) Intro / Banter (00:01:31) News Preview (00:02:46) Response to Listener Reviews Tools & Apps (00:04:48) OpenAI adds o3 Pro to ChatGPT and drops o3 price by 80 per cent, but open-source AI is delayed (00:09:10) Cursor AI editor hits 1.0 milestone, including BugBot and high-risk background agents (00:13:07) Mistral releases a pair of AI reasoning models (00:16:18) Elevenlabs' Eleven v3 lets AI voices whisper, laugh and express emotions naturally (00:19:00) ByteDance's Seedance 1.0 is trading blows with Google's Veo 3 (00:22:42) Google Reveals $20 AI Pro Plan With Veo 3 Fast Video Generator For Budget Creators Applications & Business (00:25:42) OpenAI and DeepMind are losing engineers to Anthropic in a one-sided talent war (00:34:32) OpenAI slams court order to save all ChatGPT logs, including deleted chats (00:37:24) Nvidia's Biggest Chinese Rival Huawei Struggles to Win at Home (00:43:06) Huawei Expected to Break Semiconductor Barriers with Development of High-End 3nm GAA Chips; Tape-Out by 2026 
(00:45:21) TSMC's 1.4nm Process, Also Called Angstrom, Will Make Even The Most Lucrative Clients Think Twice When Placing Orders, With An Estimate Claiming That Each Wafer Will Cost $45,000 (00:47:43) Mistral AI Launches Mistral Compute To Replace Cloud Providers from US, China Projects & Open Source (00:51:26) ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models Research & Advancements (00:57:27) Kinetics: Rethinking Test-Time Scaling Laws (01:05:12) The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning (01:10:45) Predicting Empirical AI Research Outcomes with Language Models (01:15:02) EXP-Bench: Can AI Conduct AI Research Experiments? Policy & Safety (01:20:07) Large Language Models Often Know When They Are Being Evaluated (01:24:56) Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence (01:31:16) Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—‘I would be terrified' (01:35:01) Claude Gov Models for U.S. National Security Customers Synthetic Media & Art (01:37:32) Disney And NBCUniversal Sue AI Company Midjourney For Copyright Infringement (01:40:39) AMC Networks is teaming up with AI company Runway
Terence Tao is widely considered to be one of the greatest mathematicians in history. He won the Fields Medal and the Breakthrough Prize in Mathematics, and has contributed to a wide range of fields from fluid dynamics with Navier-Stokes equations to mathematical physics & quantum mechanics, prime numbers & analytic number theory, harmonic analysis, compressed sensing, random matrix theory, combinatorics, and progress on many of the hardest problems in the history of mathematics. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep472-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/terence-tao-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Terence's Blog: https://terrytao.wordpress.com/ Terence's YouTube: https://www.youtube.com/@TerenceTao27 Terence's Books: https://amzn.to/43H9Aiq SPONSORS: To support this podcast, check out our sponsors & get discounts: Notion: Note-taking and team collaboration. Go to https://notion.com/lex Shopify: Sell stuff online. Go to https://shopify.com/lex NetSuite: Business management software. Go to http://netsuite.com/lex LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex AG1: All-in-one daily nutrition drink. 
Go to https://drinkag1.com/lex OUTLINE: (00:00) - Introduction (00:36) - Sponsors, Comments, and Reflections (09:49) - First hard problem (15:16) - Navier–Stokes singularity (35:25) - Game of life (42:00) - Infinity (47:07) - Math vs Physics (53:26) - Nature of reality (1:16:08) - Theory of everything (1:22:09) - General relativity (1:25:37) - Solving difficult problems (1:29:00) - AI-assisted theorem proving (1:41:50) - Lean programming language (1:51:50) - DeepMind's AlphaProof (1:56:45) - Human mathematicians vs AI (2:06:37) - AI winning the Fields Medal (2:13:47) - Grigori Perelman (2:26:29) - Twin Prime Conjecture (2:43:04) - Collatz conjecture (2:49:50) - P = NP (2:52:43) - Fields Medal (3:00:18) - Andrew Wiles and Fermat's Last Theorem (3:04:15) - Productivity (3:06:54) - Advice for young people (3:15:17) - The greatest mathematician of all time PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips
Jonathan Godwin is co-founder and CEO of Orbital Materials, an AI-first materials-engineering start-up. The company open-sourced Orb, a state-of-the-art simulation model, and now designs bespoke porous materials—its first aimed at cooling data-centres while capturing CO₂ or water. Jonathan shares how his DeepMind background shaped Orbital's "design-before-experiment" approach, why the team chose data-center sustainability as a beachhead market, and what it takes to build a vertically integrated, AI-native industrial company. The conversation explores the future of faster, cheaper R&D, the role of advanced materials in decarbonization, and the leap from software to physical products. In this episode, we cover: [02:12] Johnny's path from DeepMind to materials start-up [04:02] Trial-and-error vs AI-driven design shift [06:40] University/industry dynamics in materials R&D [10:17] Generative agent plus simulation for rapid discovery [13:01] Mitigating hallucinations with virtual experiments [18:18] Choosing a "hero" product and vertical integration [25:43] Dual-use chiller for cooling and CO₂ or water capture [32:26] Partnering on manufacturing to stay asset-light [35:58] Building an AI-native industrial giant of the future [36:51] Orbital's investors Episode recorded on April 30, 2025 (Published on May 27, 2025) Enjoyed this episode? Please leave us a review! Share feedback or suggest future topics and guests at info@mcj.vc. Connect with MCJ: Cody Simms on LinkedIn Visit mcj.vc Subscribe to the MCJ Newsletter *Editing and post-production work for this episode was provided by The Podcast Consultant
Head on over to https://cell.ver.so/TOE and use coupon code TOE at checkout to save 15% on your first order. Get ready to witness a turning point in mathematical history: in this episode, we dive into the AI breakthroughs that stunned number theorists worldwide. Join us as Professor Yang-Hui He discusses the murmuration conjecture, shows how DeepMind, OpenAI, and Epoch AI are rewriting the rules of pure math, and reveals what happens when machines start making research-level discoveries faster than any human could. AI is taking us beyond proof straight into the future of discovery. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e Timestamps: 00:00 Introduction to a New Paradigm 01:34 The Changing Landscape of Research 03:30 Categories of Machine Learning in Mathematics 06:53 Researchers: Birds vs. 
Hedgehogs 09:36 Personal Experiences with AI in Research 11:44 The Future Role of Academics 14:08 Presentation on the AI Mathematician 16:14 The Role of Intuition in Discovery 18:00 AI's Assistance in Vague Problem Solving 18:48 Newton and AI: A Historical Perspective 20:59 Literature Processing with AI 24:34 Acknowledging Modern Mathematicians 26:54 The Influence of Data on Mathematical Discovery 30:22 The Riemann Hypothesis and Its Implications 31:55 The BSD Conjecture and Data Evolution 33:29 Collaborations and AI Limitations 36:04 The Future of Mathematics and AI 38:31 Image Processing and Mathematical Intuition 41:57 Visual Thinking in Mathematics 49:24 AI-Assisted Discovery in Mathematics 51:34 The Murmuration Conjecture and AI Interaction 57:05 Hierarchies of Difficulty 58:43 The Murmuration Breakthrough 1:00:28 Understanding the BSD Conjecture 1:01:45 Diophantine Equations Explained 1:03:39 The Cubic Complexity 1:19:03 Neural Networks and Predictions 1:21:36 Breaking the Birch Test 1:24:44 The BSD Conjecture Clarified 1:26:21 The Role of AI in Discovery 1:30:29 The Murmuration Phenomenon 1:32:59 PCA Analysis Insights 1:35:50 The Emergence of Murmuration 1:38:35 Conjectures and AI's Role 1:41:29 Generalizing Biases in Mathematics 1:44:55 The Future of AI in Mathematics 1:49:28 The Brave New World of Discovery Links Mentioned: - Topology and Physics (book): https://amzn.to/3ZoneEn - Machine Learning in Pure Mathematics and Theoretical Physics (book): https://amzn.to/4k8SXC6 - The Calabi-Yau Landscape (book): https://amzn.to/43DO7H0 - Yang-Hui's bio and published papers: https://www.researchgate.net/profile/Yang-Hui-He - A Triumvirate of AI-Driven Theoretical Discovery (paper): https://arxiv.org/abs/2405.19973 - Edward Frenkel explains the Geometric Langlands Correspondence on TOE: https://www.youtube.com/watch?v=RX1tZv_Nv4Y - Stone Duality (Wiki): https://en.wikipedia.org/wiki/Stone_duality - Summer of Math Exposition: https://some.3b1b.co/ - Machine Learning 
meets Number Theory: The Data Science of Birch–Swinnerton-Dyer (paper): https://arxiv.org/pdf/1911.02008 - The L-functions and modular forms database: https://www.lmfdb.org/ - Epoch AI FrontierMath: https://epoch.ai/frontiermath/the-benchmark - Mathematical Beauty (article): https://www.quantamagazine.org/mathematical-beauty-truth-and-proof-in-the-age-of-ai-20250430/ SUPPORT: - Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join - Support me on Patreon: https://patreon.com/curtjaimungal - Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9 - Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4 SOCIALS: - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs #science Learn more about your ad choices. Visit megaphone.fm/adchoices
The late biologist E.O. Wilson said that “the real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and god-like technology. And it is terrifically dangerous.” Wilson said that back in 2011, long before any of us were talking about large language models or GPTs. A little more than a decade later, artificial intelligence is already completely transforming our world. Practitioners and experts have compared A.I. to the advent of electricity and fire itself. “God-like” doesn't seem that far off. Even sober experts predict disease cures and radically expanded lifespans, real-time disaster prediction and response, the elimination of language barriers, and other earthly miracles. A.I. is amazing, in the truest sense of that word. It is also leading some to predict nothing less than a crisis in what it means to be human in an age of brilliant machines. Others—including some of the people creating this technology—predict our possible extinction as a species. But you don't have to go quite that far to imagine the way it will transform our relationship toward information and our ability to pursue the truth. For tens of thousands of years, since humans started to stand upright and talk to each other, we've found our way to wisdom through disagreement and debate. But in the age of A.I., our sources of truth are machines that spit out the information we already have, reflecting our biases and our blind spots. What happens to truth when we no longer wrestle with it—and only receive it passively? When disagreeable, complicated human beings are replaced with A.I. chatbots that just tell us what we want to hear? It makes today's concerns about misinformation and disinformation seem quaint. Our ability to detect whether something is real or an A.I.-generated fabrication is approaching zero. And unlike social media—a network of people that we instinctively know can be wrong—A.I. 
systems have a veneer of omniscience, despite being riddled with the biases of the humans who trained them. Meanwhile, a global arms race is underway, with the U.S. and China competing to decide who gets to control the authoritative information source of the future. So last week Bari traveled to San Francisco to host a debate on whether this remarkable, revolutionary technology will enhance our understanding of the world and bring us closer to the truth . . .or do just the opposite. The resolution: The Truth Will Survive Artificial Intelligence! Aravind Srinivas argued yes—the truth will survive A.I. Aravind is the CEO of one of the most exciting companies in this field, Perplexity, which he co-founded in 2022 after working at OpenAI, Google, and DeepMind. Aravind was joined by Dr. Fei-Fei Li. Fei-Fei is a professor of computer science at Stanford, the founding co-director of the Stanford Institute for Human-Centered A.I., and the CEO and co-founder of World Labs, an A.I. company focusing on spatial intelligence and generative A.I. Jaron Lanier argued that no, the truth will not survive A.I. Jaron is a computer scientist, best-selling author, and the founder of VPL Research, the first company to sell virtual reality products. Jaron was joined by Nicholas Carr, the author of countless best-selling books on the human consequences of technology, including Pulitzer Prize finalist The Shallows, The Glass Cage, and, most recently, Superbloom. He also writes the wonderful Substack New Cartographies. Learn more about your ad choices. Visit megaphone.fm/adchoices
Nabeel Qureshi is an entrepreneur, writer, researcher, and visiting scholar of AI policy at the Mercatus Center (alongside Tyler Cowen). Previously, he spent nearly eight years at Palantir, working as a forward-deployed engineer. His work at Palantir ranged from accelerating the Covid-19 response to applying AI to drug discovery to optimizing aircraft manufacturing at Airbus. Nabeel was also a founding employee and VP of business development at GoCardless, a leading European fintech unicorn.What you'll learn:• Why almost a third of all Palantir's PMs go on to start companies• How the “forward-deployed engineer” model works and why it creates exceptional product leaders• How Palantir transformed from a “sparkling Accenture” into a $200 billion data/software platform company with more than 80% margins• The unconventional hiring approach that screens for independent-minded, intellectually curious, and highly competitive people• Why the company intentionally avoids traditional titles and career ladders—and what they do instead• Why they built an ontology-first data platform that LLMs love• How Palantir's controversial “bat signal” recruiting strategy filtered for specific talent types• The moral case for working at a company like Palantir—Brought to you by:• WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs• Attio—The powerful, flexible CRM for fast-growing startups• OneSchema—Import CSV data 10x faster—Where to find Nabeel S. Qureshi:• X: https://x.com/nabeelqu• LinkedIn: https://www.linkedin.com/in/nabeelqu/• Website: https://nabeelqu.co/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Nabeel S. 
Qureshi(05:10) Palantir's unique culture and hiring(13:29) What Palantir looks for in people(16:14) Why they don't have titles(19:11) Forward-deployed engineers at Palantir(25:23) Key principles of Palantir's success(30:00) Gotham and Foundry(36:58) The ontology concept(38:02) Life as a forward-deployed engineer(41:36) Balancing custom solutions and product vision(46:36) Advice on how to implement forward-deployed engineers(50:41) The current state of forward-deployed engineers at Palantir(53:15) The power of ingesting, cleaning and analyzing data(59:25) Hiring for mission-driven startups(01:05:30) What makes Palantir PMs different(01:10:00) The moral question of Palantir(01:16:03) Advice for new startups(01:21:12) AI corner(01:24:00) Contrarian corner(01:25:42) Lightning round and final thoughts—Referenced:• Reflections on Palantir: https://nabeelqu.co/reflections-on-palantir• Palantir: https://www.palantir.com/• Intercom: https://www.intercom.com/• Which companies produce the best product managers: https://www.lennysnewsletter.com/p/which-companies-produce-the-best• Gotham: https://www.palantir.com/platforms/gotham/• Foundry: https://www.palantir.com/platforms/foundry/• Peter Thiel on X: https://x.com/peterthiel• Alex Karp: https://en.wikipedia.org/wiki/Alex_Karp• Stephen Cohen: https://en.wikipedia.org/wiki/Stephen_Cohen_(entrepreneur)• Joe Lonsdale on LinkedIn: https://www.linkedin.com/in/jtlonsdale/• Tyler Cowen's website: https://tylercowen.com/• This Scandinavian City Just Won the Internet With Its Hilarious New Tourism Ad: https://www.afar.com/magazine/oslos-new-tourism-ad-becomes-viral-hit• Safe Superintelligence: https://ssi.inc/• Mira Murati on X: https://x.com/miramurati• Stripe: https://stripe.com/• Building product at Stripe: craft, metrics, and customer obsession | Jeff Weinstein (Product lead): https://www.lennysnewsletter.com/p/building-product-at-stripe-jeff-weinstein• Airbus: https://www.airbus.com/en• NIH: https://www.nih.gov/• Jupyter 
Notebooks: https://jupyter.org/• Shyam Sankar on LinkedIn: https://www.linkedin.com/in/shyamsankar/• Palantir Gotham for Defense Decision Making: https://www.youtube.com/watch?v=rxKghrZU5w8• Foundry 2022 Operating System Demo: https://www.youtube.com/watch?v=uF-GSj-Exms• SQL: https://en.wikipedia.org/wiki/SQL• Airbus A350: https://en.wikipedia.org/wiki/Airbus_A350• SAP: https://www.sap.com/index.html• Barry McCardel on LinkedIn: https://www.linkedin.com/in/barrymccardel/• Understanding ‘Forward Deployed Engineering' and Why Your Company Probably Shouldn't Do It: https://www.barry.ooo/posts/fde-culture• David Hsu on LinkedIn: https://www.linkedin.com/in/dvdhsu/• Retool's Path to Product-Market Fit—Lessons for Getting to 100 Happy Customers, Faster: https://review.firstround.com/retools-path-to-product-market-fit-lessons-for-getting-to-100-happy-customers-faster/• How to foster innovation and big thinking | Eeke de Milliano (Retool, Stripe): https://www.lennysnewsletter.com/p/how-to-foster-innovation-and-big• Looker: https://cloud.google.com/looker• Sorry, that isn't an FDE: https://tedmabrey.substack.com/p/sorry-that-isnt-an-fde• Glean: https://www.glean.com/• Limited Engagement: Is Tech Becoming More Diverse?: https://www.bkmag.com/2017/01/31/limited-engagement-creating-diversity-in-the-tech-industry/• Operation Warp Speed: https://en.wikipedia.org/wiki/Operation_Warp_Speed• Mark Zuckerberg testifies: https://www.businessinsider.com/facebook-ceo-mark-zuckerberg-testifies-congress-libra-cryptocurrency-2019-10• Anduril: https://www.anduril.com/• SpaceX: https://www.spacex.com/• Principles: https://nabeelqu.co/principles• Wispr Flow: https://wisprflow.ai/• Claude code: https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview• Gemini Pro 2.5: https://deepmind.google/technologies/gemini/pro/• DeepMind: https://deepmind.google/• Latent Space newsletter: https://www.latent.space/• Swyx on x: https://x.com/swyx• Neural networks in chess programs: 
https://www.chessprogramming.org/Neural_Networks• AlphaZero: https://en.wikipedia.org/wiki/AlphaZero• The top chess players in the world: https://www.chess.com/players• Decision to Leave: https://www.imdb.com/title/tt12477480/• Oldboy: https://www.imdb.com/title/tt0364569/• Christopher Alexander: https://en.wikipedia.org/wiki/Christopher_Alexander—Recommended books:• The Technological Republic: Hard Power, Soft Belief, and the Future of the West: https://www.amazon.com/Technological-Republic-Power-Belief-Future/dp/0593798694• Zero to One: Notes on Startups, or How to Build the Future: https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296• Impro: Improvisation and the Theatre: https://www.amazon.com/Impro-Improvisation-Theatre-Keith-Johnstone/dp/0878301178/• William Shakespeare: Histories: https://www.amazon.com/Histories-Everymans-Library-William-Shakespeare/dp/0679433120/• High Output Management: https://www.amazon.com/High-Output-Management-Andrew-Grove/dp/0679762884• Anna Karenina: https://www.amazon.com/Anna-Karenina-Leo-Tolstoy/dp/0143035002—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.lennysnewsletter.com/subscribe
My Conversation with Gary begins at about 22 mins Stand Up is a daily podcast where I book, host, edit, post and promote new episodes with brilliant guests every day. Please subscribe now for as little as $5 and gain access to a community of over 700 awesome, curious, kind, funny, brilliant, generous souls Check out StandUpwithPete.com to learn more Get Gary's new book! A veteran Pulitzer Prize-winning journalist shadows the top thinkers in the field of Artificial Intelligence, introducing the breakthroughs and developments that will change the way we live and work. Artificial Intelligence has been “just around the corner” for decades, continually disappointing those who long believed in its potential. But now, with the emergence and growing use of ChatGPT, Gemini, and a rapidly multiplying number of other AI tools, many are wondering: Has AI's moment finally arrived? In AI Valley, Pulitzer Prize-winning journalist Gary Rivlin brings us deep into the world of AI development in Silicon Valley. Over the course of more than a year, Rivlin closely follows founders and venture capitalists trying to capitalize on this AI moment. That includes LinkedIn founder Reid Hoffman, the legendary investor whom the Wall Street Journal once called “the most connected person in Silicon Valley.” Through Hoffman, Rivlin is granted access to a number of companies on the cutting edge of AI research, such as Inflection AI, the company Hoffman cofounded in 2022, and OpenAI, the San Francisco-based startup that sparked it all with its release at the end of that year of ChatGPT. In addition to Hoffman, Rivlin introduces us to other AI experts, including OpenAI cofounder Sam Altman and Mustafa Suleyman, the co-founder of DeepMind, an early AI startup that Google bought for $650 million in 2014. Rivlin also brings readers inside Microsoft, Meta, Google and other tech giants scrambling to keep pace. 
On this vast frontier, no one knows which of these companies will hit it big–or which will flame out spectacularly. In this riveting narrative marbled with familiar names such as Musk, Zuckerberg, and Gates, Rivlin chronicles breakthroughs as they happen, giving us a deep understanding of what's around the corner in AI development. An adventure story full of drama and unforgettable personalities, AI Valley promises to be the definitive story for anyone seeking to understand the latest phase of world-changing discoveries and the minds behind them. Join us Mondays and Thursdays at 8 EST for our bi-weekly Happy Hour Hangouts! Pete on Blue Sky Pete on Threads Pete on Tik Tok Pete on YouTube Pete on Twitter Pete On Instagram Pete Personal FB page Stand Up with Pete FB page All things Jon Carroll Follow and Support Pete Coe Buy Ava's Art Hire DJ Monzyk to build your website or help you with Marketing Gift a Subscription https://www.patreon.com/PeteDominick/gift
In episode 1860, Jack and guest co-host Andrew Ti are joined by host of Worse Than You, Mo Fry Pasic, to discuss… REAL ID Isn’t Real, Cybertrucks Just Totally Stop Selling, This AI Expert Thinks the AI Bubble’s About to Pop, and more!
What you need to know about the REAL ID requirements for air travel
The Racist Origins of the Real ID Act
Top Trump agency reveals key reason why REAL ID will be enforced
'Mass surveillance': Conservatives sound alarm over Trump admin's REAL ID rollout
Trump’s Insistence on Real ID Has Become a Flashpoint for His Tinfoil Hat Fans
You can get a free Krispy Kreme doughnut on May 7 for Real ID deadline: Here's how
Homeland Security chief says travelers with no REAL ID can fly for now, but with likely extra steps
Flying out of Indianapolis without REAL ID? Don't fret — the airport isn't turning people away
Tesla’s Inventory of Unsold Cybertrucks Skyrockets, Despite Offering $10K Discounts and Concealing Listings
The Silicon Valley sceptic warning tech’s new bubble is about to burst
Deep Learning Is Hitting a Wall
Microsoft’s £2.5bn investment in Britain at risk from creaking power grid
Chess helped me win the Nobel Prize, says Google’s AI genius
OpenAI overrode concerns of expert testers to release sycophantic GPT-4o
The next British boom could be in the offing – if Starmer abandons net zero
Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’
LISTEN: Indeed by Cruza
See omnystudio.com/listener for privacy information.
Explains advances in large language models (LLMs): scaling laws (the relationships among model size, data size, and compute) and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Covers the evolution of the transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting, which significantly improve performance on complex tasks.
Links
Notes and resources at ocdevel.com/mlg/mlg34
Build the future of multi-agent software with AGNTCY
Try a walking desk to stay healthy & sharp while you learn & code
Transformer Foundations and Scaling Laws
- Transformers: Introduced by the 2017 "Attention Is All You Need" paper, transformers allow parallel training and inference over sequences using self-attention, in contrast to the sequential nature of RNNs.
- Scaling laws: Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately.
- The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models trained on more data (e.g., Chinchilla, the LLaMA series) proved more compute- and inference-efficient.
Emergent Abilities in LLMs
- Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including:
- In-context learning (ICL): Performing new tasks based solely on prompt examples at inference time.
- Instruction following: Executing natural-language tasks not seen during training.
- Multi-step reasoning & chain of thought (CoT): Solving arithmetic, logic, or symbolic-reasoning problems by generating intermediate reasoning steps.
- Discontinuity & debate: These abilities appear abruptly in larger models, though recent research suggests this could result from non-linearities in evaluation metrics rather than innate model properties.
Architectural Evolutions: Mixture of Experts (MoE)
- MoE layers: Modern LLMs often replace standard feed-forward layers with MoE structures, composed of many independent "expert" networks specializing in different subdomains or latent structures.
- A gating network routes each token to the most relevant experts, activating only a subset of parameters; this is called "sparse activation." It enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead.
- Specialization & efficiency: Experts learn different data/knowledge types, boosting specialization and throughput, though care is needed to avoid overfitting and underutilization of experts.
The Three-Phase Training Process
1. Unsupervised pre-training: Next-token prediction on massive datasets builds a foundation model capturing general language patterns.
2. Supervised fine-tuning (SFT): Training on labeled prompt-response pairs teaches the model to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed.
3. Reinforcement Learning from Human Feedback (RLHF): Collects human preference data by generating multiple responses to prompts and having annotators rank them. A reward model is trained on these rankings, and the LLM is then updated (typically with the PPO algorithm) to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness).
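The sparse routing behind MoE layers can be illustrated in a few lines. This is a toy sketch with made-up dimensions: each "expert" is a single weight matrix standing in for a real MLP expert, and the gating network is a single linear layer; real systems add batching, load-balancing losses, and expert parallelism.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 8, 4, 2

# Toy experts: one weight matrix each (real MoE experts are small MLPs).
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
# Gating network: a linear layer producing one logit per expert.
W_gate = rng.normal(size=(d_model, n_experts))

def moe_layer(x):
    """Route one token vector to its top-k experts (sparse activation)."""
    logits = x @ W_gate
    top = np.argsort(logits)[-top_k:]      # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only top_k of the n_experts experts actually run for this token,
    # so compute per token stays fixed even as n_experts grows.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape)  # (8,)
```

Growing `n_experts` increases total parameters while per-token compute stays roughly constant, which is the appeal of sparse activation; the cost is that all experts must still sit in memory.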
- RLHF introduces complexity and the risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways.
Advanced Reasoning Techniques
- Prompt engineering: The art/science of crafting prompts that elicit better model responses; shown to dramatically affect output quality.
- Chain-of-thought (CoT) prompting: Guides models to elaborate step-by-step reasoning before arriving at a final answer, demonstrably improving results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (exploring multiple reasoning branches in parallel).
- Automated reasoning optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs against gains in accuracy and transparency.
Optimization for Training and Inference
- Tradeoffs: The optimal balance of model size, data, and compute is determined not only for pretraining but also for inference efficiency, as lifetime inference costs may exceed initial training costs.
- Current trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
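The self-consistency variant of CoT mentioned above is just sampling plus majority voting. A minimal sketch, where `sample_chain_answer` is a hypothetical stub standing in for an LLM call that samples a reasoning chain at nonzero temperature and returns its final answer:

```python
from collections import Counter

def sample_chain_answer(question, seed):
    # Hypothetical stub for an LLM sampling call; the hard-coded answers
    # mimic five independently sampled reasoning chains, some of them wrong.
    fake_final_answers = {0: "7", 1: "7", 2: "9", 3: "7", 4: "9"}
    return fake_final_answers[seed % 5]

def self_consistency(question, n_samples=5):
    """Sample several chain-of-thought completions and majority-vote the answers."""
    answers = [sample_chain_answer(question, s) for s in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 3 + 4?"))  # prints "7" (majority answer, 3 votes to 2)
```

The intuition is that independent reasoning chains are more likely to agree on the correct answer than on any particular wrong one, so voting filters out individual faulty chains, at the price of n times the inference cost.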