Podcasts about DeepMind

  • 1,072 podcasts
  • 2,589 episodes
  • 43m average episode duration
  • 1 new episode daily
  • Latest episode: Dec 15, 2025
DeepMind

Popularity trend: 2017–2024

Best podcasts about DeepMind


Latest podcast episodes about DeepMind

EUVC
E670 | This Week in European Tech with Dan, Mads & Lomax

EUVC

Play Episode Listen Later Dec 15, 2025 44:36


Welcome back to another episode of Upside at the EUVC Podcast, where Dan Bowyer, Mads Jensen of SuperSeed and Lomax Ward of Outsized Ventures gather for a holiday-home special to cut through the noise around Europe's tech, geopolitics and AI shifts. What begins as an innocent debate about whether DeepMind is “still a UK company” quickly spirals into a tour of sovereign AI strategy, the SpaceX mega-raise, Europe's increasingly uncomfortable place between China and the US, defence-spending reality checks and a surprisingly uplifting set of deep-tech deals across the continent. It is classic Upside: the takes are sharp, the geopolitics gets spiky, and the optimism… well, it arrives eventually.

What's covered:
04:36 AI-for-Science, robotics and the new “AI scientist” era
06:50 A national-curriculum Gemini and the vision of a tutor for every child
09:39 The SpaceX 2026 IPO: what investors are actually buying
14:00 Starship, orbital compute and the trillion-dollar imagination gap
18:07 Why Europe missed the space race once again
19:43 Portugal flips the script: “Economy of the Year”
22:58 Europe between China's export tsunami and America's cold shoulder
32:07 Defence budgets: the hype, the delay and the reality for startups
34:25 AI Corner: bubble fears, Mistral's comeback, Meta goes closed, China goes full-stack

Comms Strategy Expert Session: apply or share the opportunity with a founder or investor in your network: https://luma.com/euvc-comms-expert-session

Jungunternehmer Podcast
Ingredient - B2B instead of B2C: Why ROI decides - with Karl-Moritz Hermann, reliant.ai

Jungunternehmer Podcast

Play Episode Listen Later Dec 15, 2025 11:18


Karl-Moritz Hermann, founder of reliant.ai, talks about the path from DeepMind to his own AI startup. He explains why they deliberately chose B2B over B2C, how they identified real problems through benchmarks, and why 85% accuracy can sometimes be excellent and sometimes catastrophic.

What you'll learn:
- How to find the right AI product strategy
- How to identify real problems
- Why benchmarks are decisive
- The right mix of research and application

EVERYTHING ABOUT UNICORN BAKERY: https://stan.store/fabiantausch

More on Karl-Moritz:
LinkedIn: https://www.linkedin.com/in/karlmoritz/
Reliant AI: https://www.reliant.ai/

Join our Founder Tactics Newsletter: twice a week, the tactics of the world's best founders land straight in your inbox: https://www.tactics.unicornbakery.de/

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome to AI Unraveled (December 12, 2025): Your daily strategic briefing on the business impact of AI. Today on AI Unraveled, we break down the escalation in the model wars as OpenAI rushes GPT-5.2 to market to counter Google's Gemini 3. We also analyze the massive copyright lawsuit Disney just handed to Google, Rivian's strategic pivot to proprietary AI chips, and the new space race between Bezos and Musk to build orbital data centers. Plus, why the Financial Times believes the "Hyperscale" bubble might burst in favor of specialized industrial AI. Strategic Pillars & Topics

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Business and Development Weekly Rundown:

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Dec 13, 2025 16:51 Transcription Available


Welcome to AI Unraveled (From December 08 to December 14th, 2025): Your daily strategic briefing on the business impact of AI. This week on AI Unraveled, we recap a massive week in artificial intelligence. The landscape shifted dramatically as Disney chose sides—inking a $1B deal with OpenAI while slapping Google with a cease-and-desist. We break down the release of GPT-5.2, Meta's strategic pivot away from open source, and the new "space race" to build orbital data centers. Plus, the US government approves Nvidia sales to China (with a catch), and Runway unveils its General World Model. Key Topics:

Geek Forever's Podcast
The Thinking Game: Summarizing DeepMind's Path to Cracking the Code of Human Life | Geek Talk EP179

Geek Forever's Podcast

Play Episode Listen Later Dec 11, 2025 13:23


In 2024, the world witnessed a historic moment in science: humanity's most prestigious award, the Nobel Prize in Chemistry, was given not to a chemist hunched over test tubes in a traditional lab, but to a "builder of artificial intelligence." This man started out as a chess prodigy, became a video game creator, then founded a company called DeepMind that was later acquired by Google, and has become a key figure in unlocking the secrets of life itself, someone who may shape the future for all of us. His story is told in the documentary The Thinking Game, and that man is Sir Demis Hassabis. Today we trace his journey from playing board games to building an "artificial brain" smarter than humans. How did this story come about, and why is the world watching? Have a listen, and don't forget to follow the Geek Forever's Podcast channel. #DeepMind #DemisHassabis #AI #ArtificialIntelligence #NobelPrize2024 #NobelPrize #AlphaGo #AlphaFold #TheThinkingGame #Technology #Science #GeneralKnowledge #Innovation #FutureTech #Documentary #geektalk #geekforeverpodcast

Let's Talk AI
#227 - Jeremie is back! DeepSeek 3.2, TPUs, Nested Learning

Let's Talk AI

Play Episode Listen Later Dec 9, 2025 94:40


Our 227th episode with a summary and discussion of last week's big AI news! Recorded on 12/05/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
- DeepSeek 3.2 and Flux 2 release, showcasing advancements in open-source AI models for natural language processing and image generation respectively.
- Amazon's new AI chips and Google's TPUs signal potential shifts in AI hardware dominance, with growing competition against Nvidia.
- Anthropic's potential IPO and OpenAI's declared ‘Code Red' indicate significant moves in the AI business landscape, including high venture funding rounds for startups.
- Key research papers from DeepMind and Google explore advanced memory architectures and multi-agent systems, indicating ongoing efforts to enhance AI reasoning and efficiency.

Timestamps:
(00:00:10) Intro / Banter
(00:02:42) News Preview
Tools & Apps
(00:03:30) Deepseek 3.2: New AI Model is Faster, Cheaper and Smarter
(00:23:22) Black Forest Labs launches Flux.2 AI image models to challenge Nano Banana Pro and Midjourney
(00:28:00) Sora and Nano Banana Pro throttled amid soaring demand | The Verge
(00:29:34) Mistral closes in on Big AI rivals with new open-weight frontier and small models | TechCrunch
(00:31:41) Kling's Video O1 launches as the first all-in-one video model for generation and editing
(00:34:07) Runway rolls out Gen 4.5 AI video model that beats Google, OpenAI
Applications & Business
(00:35:18) NVIDIA's Partners Are Beginning to Tilt Toward Google's TPU Ecosystem, with Foxconn Reportedly Securing TPU Rack Orders
(00:40:37) Amazon releases an impressive new AI chip and teases an Nvidia-friendly roadmap | TechCrunch
(00:43:03) OpenAI declares ‘code red' as Google catches up in AI race | The Verge
(00:46:20) Anthropic reportedly preparing for massive IPO in race with OpenAI: FT
(00:48:41) Black Forest Labs raises $300M at $3.25B valuation | TechCrunch
(00:49:20) Paris-based AI voice startup Gradium nabs $70M seed | TechCrunch
(00:50:10) OpenAI announced a 1 GW Stargate cluster in Abu Dhabi
(00:53:22) OpenAI's investment into Thrive Holdings is its latest circular deal
(00:55:11) OpenAI to acquire Neptune, an AI model training assistance startup
(00:56:11) Anthropic acquires developer tool startup Bun to scale AI coding
(00:56:55) Microsoft drops AI sales targets in half after salespeople miss their quotas - Ars Technica
Projects & Open Source
(00:57:51) [2511.22570] DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
(01:01:52) Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory
Research & Advancements
(01:05:44) Nested Learning: The Illusion of Deep Learning Architecture
(01:13:30) Multi-Agent Deep Research: Training Multi-Agent Systems with M-GRPO
(01:15:50) State of AI: An Empirical 100 Trillion Token Study with OpenRouter
Policy & Safety
(01:21:52) Trump signs executive order launching Genesis Mission AI project
(01:24:42) OpenAI has trained its LLM to confess to bad behavior | MIT Technology Review
(01:29:34) US senators seek to block Nvidia sales of advanced chips to China

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

alfalfa
The $140k Poverty Line, Inside the F1 Garage & OpenAI's "Code Red" | Ep. 268

alfalfa

Play Episode Listen Later Dec 5, 2025 122:59


From the $140k "real" poverty line to an exclusive look inside the F1 Las Vegas paddock. We debate the welfare "Valley of Death," break down the capital flight from Bitcoin to AI, and ask if the "App Layer" is dead. Plus, a major announcement regarding the future of the podcast. Welcome to the Alfalfa Podcast

Design of AI: The AI podcast for product teams
The Creativity Recession and Why Product Leaders Must Reverse It Now

Design of AI: The AI podcast for product teams

Play Episode Listen Later Dec 5, 2025 46:00


Our latest guest is Maya Ackerman — AI-creativity researcher, professor, and author of Creative Machines: AI, Art & Us (Wiley), as well as founder of WaveAI and LyricStudio (view her recent collab with NVIDIA).

Maya's perspective is not just insightful — it's a necessary reality check for anyone building AI today. She challenges the comforting narrative that AI is a neutral tool or a natural evolution of creativity. Instead, she exposes a truth many in tech avoid: AI is being deployed in ways that actively diminish human creativity, and businesses are incentivized to accelerate that trend. Her research shows how overly aligned, correctness-first models flatten imagination and suppress the divergent thinking that defines human originality. But she also shows what's possible when AI is designed differently — improvisational systems that spark new directions, expand a creator's mental palette, and reinforce human authorship rather than absorbing it.

This episode matters because Maya names what the industry refuses to admit. The problem is not “AI getting too powerful,” it's AI being used to replace instead of elevate. Businesses are applying it as a cost-cutting mechanism, not a creative amplifier. And unless product leaders intervene, the damage to creativity — and to the people who rely on it for their livelihoods — will become irreversible.

Listen to the episode on Spotify, Apple Podcasts, YouTube.

We're engineering a global creative regression and pretending we aren't. Generative AI could radically expand human imagination, but the systems we deploy today overwhelmingly suppress it. The literature is unequivocal:
* AI boosts creative output only when tools are intentionally designed for exploration, not correctness.
* When aligned toward predictability, AI drives conformity and sameness.
* The rise of “AI slop” is not an insult — it's the logical outcome of misaligned incentives.
* New evidence shows that AI-assisted outputs become more similar as more people use the same tools, reducing collective creativity even when individual outputs look “better.”
* Homogenization is measurable at scale: marketing, design, and written content generated with AI converge toward the same tone and syntax, lowering engagement and cultural diversity.
* Repeated reliance on AI weakens human originality over time — users begin outsourcing ideation, losing confidence and capacity for divergent thought.

Resources:
* The Impact of AI on Creativity: https://www.researchgate.net/publication/395275000_The_Impact_of_AI_on_Creativity_Enhancing_Human_Potential_or_Challenging_Creative_Expression
* Generative AI and Creativity (Meta-Analysis): https://arxiv.org/pdf/2505.17241
* AI Slop Overview: https://en.wikipedia.org/wiki/AI_slop
* Generative AI Enhances Individual Creativity but Reduces Collective Novelty: https://pmc.ncbi.nlm.nih.gov/articles/PMC11244532/
* Generative AI Homogenizes Marketing Content: https://papers.ssrn.com/sol3/Delivery.cfm/5367123.pdf?abstractid=5367123
* Human Creativity in the Age of LLMs (decline in divergent thinking): https://arxiv.org/abs/2410.03703

BOTTOM LINE: If your product optimizes for correctness, brand safety, and throughput before originality, you are actively contributing to the global collapse of creative quality. AI must be designed to spark, not sanitize, human imagination.

Thanks for reading Design of AI: Strategies for Product Teams & Agencies!
This post is public so feel free to share it.

Award-winning creative talent is disappearing at scale, and the trend is accelerating.

The global creative workforce is shrinking faster than at any time in modern history. Companies claim AI is “enhancing creativity,” yet most restructuring reveals the opposite: AI is being deployed primarily to cut labor costs. Layoff announcements top 1.1 million this year, the most since the 2020 pandemic.

What's happening now:
* Omnicom announced 4,000 job cuts and shut multiple agencies — Reuters reporting: https://www.reuters.com/business/media-telecom/omnicom-cut-4000-jobs-shut-several-agencies-after-ipg-takeover-ft-reports-2025-12-01/
* WPP, Publicis, and IPG executed multi-round layoffs across design, writing, strategy, and production.
* Digiday interviews confirm AI is used mainly to eliminate junior and mid-level creative roles: https://digiday.com/marketing/confessions-of-an-agency-founder-and-chief-creative-officer-on-ais-threat-to-junior-creatives/

The most important read on the future & destruction of agencies comes from Zoe Scaman. She always brings a powerful and necessary mirror to the shitshow that is the modern corporate world. Read it here:

Freelancers and independent creatives are being hit even harder:
* UK survey: 21% of creative freelancers already lost work because of AI; many report sharply lower pay — https://www.museumsassociation.org/museums-journal/news/2025/03/report-finds-creative-freelancers-hit-by-loss-of-work-late-pay-and-rise-of-ai/
* Illustrators, motion designers, and concept artists report declining commissions as clients adopt Midjourney-style pipelines.
* Voice actors face shrinking bookings due to synthetic voice models.
* Stock photography, stock audio, and digital concepting have been heavily cannibalized by tools like Midjourney, Runway, and Suno.

The research into AI shows even deeper risks:
* The Rise of Generative AI in Creative Agencies — confirms agencies deploy AI for margin protection rather than creative innovation: https://www.diva-portal.org/smash/get/diva2%3A1976153/FULLTEXT03.pdf
* IFOW/Sussex study shows AI exposure correlates with lower job quality and salary stagnation for creatives: https://www.ifow.org/news-articles/marley-bartlett-research-poster---ai-job-quality-and-the-creative-industries

BOTTOM LINE: Creative roles are vanishing because AI is being optimized for efficiency rather than imagination. If we want a future with vibrant creative industries, AI must be designed to amplify human originality — not replace the people who produce it.

Please participate in our year-end survey

We are studying how AI is restructuring careers, skills, and expectations across product, design, engineering, research, and strategy. Your responses influence:
* the direction of Design of AI in 2025,
* what questions we investigate through research,
* what frameworks we build to help leaders adapt — and protect — their teams.

Take the survey: https://tally.so/r/Y5D2Q5

Understand your cognitive style so you know how to best leverage AI to boost you

The Creative AI Academy has developed an assessment tool to help you understand your creative style. We all tackle problems differently and come up with novel solutions using different methods. Take the ThinkPrint assessment to get a blueprint of how you ideate, judge, refine, and decide.
Knowing this will help you know in which ways AI can boost, rather than undermine, your originality. For me it was powerful to see my thinking style mirrored back at me. It gave structure to what enhances and undermines my creativity, meaning I better understand what role (if any) AI should play in expanding my creative capabilities. Thank you to Angella Tapé for demonstrating this tool and presenting the perfect next evolution of Dr. Ackerman's lessons about needing AI to be a creative partner, not a cannibalizer.

BOTTOM LINE: Without cognitive self-awareness, you're not “partnering” with AI — you're surrendering your creative identity to it. Take the ThinkPrint assessment and redesign your workflow around human-led, AI-supported thinking.

We are trading away human intellect for productivity — and the safety evidence is damning.

The research is now impossible to ignore: AI makes us faster, but it makes us worse thinkers. A major multi-university study (Harvard, MIT, Wharton) found that users with AI assistance worked more quickly but were “more likely to be confidently wrong.” Source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

This pattern shows up across cognitive science:
* Stanford and DeepMind researchers found that relying on AI “reduced participants' memory for the material and their ability to reconstruct reasoning steps.” Source: https://arxiv.org/abs/2402.01832
* EPFL showed that routine LLM use “led to measurable declines in writing ability and originality over time.” Source: https://arxiv.org/abs/2401.00612
* University of Toronto researchers warn that repeated LLM use “narrows human originality, shifting users from creators to evaluators of machine output.” Source: https://arxiv.org/abs/2410.03703

In other words: we are outsourcing the exact cognitive muscles that make human thinking valuable — creativity, reasoning, comprehension — and replacing them with pattern-matching convenience. And while we weaken ourselves, the companies building the systems shaping our cognition are failing at even the most basic safety expectations.

The AI Safety Index (Winter 2025) reported: “No major AI developer demonstrated adequate preparedness for catastrophic risks. Most scored poorly on transparency, accountability, and external evaluability.” Source: https://futureoflife.org/ai-safety-index-winter-2025/

A companion academic review by Oxford, Cambridge, and Georgetown concluded: “Safety commitments across leading LLM developers are inconsistent, largely self-regulated, and often unverifiable.” Source: https://arxiv.org/pdf/2508.16982

We are weakening human cognition while trusting companies that cannot prove they are safe. There is no version of this trajectory that ends well without deliberate intervention.

Resources:
* The Hidden Wisdom of Knowing in the AI Era
* A Critical Survey of LLM Development Initiatives: https://arxiv.org/pdf/2508.16982
* Future of Life AI Safety Index (Winter 2025): https://futureoflife.org/ai-safety-index-winter-2025/
* Supporting Safety Documentation (PDF): https://cdn.sanity.io/files/wc2kmxvk/revamp/79776912203edccc44f84d26abed846b9b23cb06.pdf

BOTTOM LINE: Tools that reduce effort but not capability are not accelerators — they are cognitive liabilities. Product leaders must design for mental strength, not dependency.

Schools are producing prompt operators, not original thinkers.

Education systems are bolting AI onto decades-old learning models without rethinking what learning is. Instead of cultivating reasoning, imagination, and embodied intelligence, schools are teaching children to rely on AI systems they cannot critique.

Resources:
* UNESCO: AI & the Future of Education: https://www.unesco.org/en/articles/ai-and-future-education-disruptions-dilemmas-and-directions
* Beyond Fairness in Computer Vision: https://cdn.sanity.io/files/wc2kmxvk/revamp/79776912203edccc44f84d26abed846b9b23cb06.pdf
* AI Skills for Students: https://trswarriors.com/ai-education-preparing-students-future/

BOTTOM LINE: If we do not redesign education, we will create a generation of humans who can operate AI but cannot outthink, challenge, or transcend it.

Featured AI Thinker: Luiza Jarovsky

Luiza Jarovsky is one of the most essential voices in AI governance today. At a time when global AI companies are actively pushing to loosen regulation — or bypass it entirely — Luiza's work provides a critical counterbalance rooted in human rights, safety, law, and long-term societal impact.

Why her work matters now:
* She exposes the structural risks of deregulated AI adoption across governments and corporations.
* She documents how weak or performative governance puts vulnerable communities at disproportionate risk.
* She offers practical frameworks for ethical, enforceable AI oversight.

Follow her work.

BOTTOM LINE: If you build or deploy AI and you are not following Luiza's work, you are missing the governance lens that will define which companies survive the coming regulatory wave.

Recommended Reality Checks

Two critical signals from the field this week:
* Ethan Mollick on the accelerating automation of creative workflows: https://x.com/emollick/status/1996418841426227516 — AI is quietly outperforming human creative processes in categories many believed were “safe.” The speed of improvement is outpacing organizational awareness.
* Jeffrey Lee Funk on markets losing patience with empty AI narratives: https://x.com/jeffreyleefunk/status/1996612615850676703 — Investors are separating real AI value from hype. Companies promising transformation without measurable impact are being punished.

BOTTOM LINE: The creative and product landscape is shifting beneath our feet. Those who don't adapt — intellectually, strategically, and operationally — will lose relevance.

Final Reflection — Legacy Is a Product Decision

Everything in this newsletter points to a single, unavoidable truth: AI does not define our future. The product decisions we make do.

We can build tools that:
* expand human originality,
* strengthen cognitive resilience,
* elevate creative careers,
* and produce a generation capable of thinking beyond the machine.

Or we can build tools that:
* replace the creative class,
* hollow out human judgment,
* weaken educational outcomes,
* and leave society dependent on systems controlled by a handful of companies.

As product leaders — designers, strategists, researchers, technologists — we decide which future gets built. Legacy isn't abstract. It's the cumulative effect of every interface we design, every shortcut we greenlight, every metric we reward, and every model we deploy.

If you want to build AI that strengthens humanity instead of diminishing it, reach out. Let's design for human outcomes, not machine efficiency. arpy@ph1.ca

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

Vida com IA
#139 - The story of DeepMind and Demis Hassabis, with Fabrício Carraro.

Vida com IA

Play Episode Listen Later Dec 4, 2025 48:22


Hey folks, in this episode I welcome my great friend Fabrício Carraro, from the podcast IA sob controle, to talk about the documentary The Thinking Game, which tells the story of DeepMind and Demis Hassabis.

Here is the link to the sales page to learn more about me and the course: https://www.cursovidacomia.com.br/
Here is the link to sign up: https://pay.hotmart.com/W98240617U
Documentary video: https://www.youtube.com/watch?v=d95J8yzvjbQ
Video of an AI training to play Pokémon: https://www.youtube.com/watch?v=DcYLT37ImBY
Episode on diffusion: https://open.spotify.com/episode/2gIzBcgIjSwoDX62KmepfK?si=e6c68fe098544723
WhatsApp group: https://chat.whatsapp.com/GNLhf8aCurbHQc9ayX5oCP
Podcast Instagram: https://www.instagram.com/podcast.vidacomia
My LinkedIn: https://www.linkedin.com/in/filipe-lauar/
Fabrício's LinkedIn: https://www.linkedin.com/in/fabriciocarraro/
The IA sob controle podcast: https://open.spotify.com/show/5xLCMHJ6eGWzdu8JaIDkuP?si=8ffcc0b287e64e6a

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
⚛️AI Unraveled Special: The Quantum Threshold – Google's ‘Willow' Breakthrough, AlphaQubit, & The End of Encryption

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Dec 2, 2025 15:10


Special Report: The Quantum Threshold (December 02, 2025). This special episode dissects the paradigm shift from theoretical quantum physics to engineering reality, triggered by Google's latest hardware and AI advancements.

Key Topics & Takeaways:
⚡ The Engineering Miracle (Google Willow): Google has unveiled "Willow," a 105-qubit processor that successfully demonstrates the "Threshold Theorem." For the first time, increasing the number of physical qubits (from code distance 3 to 7) has led to an exponential drop in error rates, proving that fault-tolerant quantum computing is physically possible.
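To make the scaling claim concrete, here is a minimal Python sketch of the below-threshold behavior described above: in a simple surface-code model, every increase of the code distance by 2 divides the logical error rate by a constant suppression factor. The base error rate and suppression factor are illustrative placeholders, not Google's measured Willow figures.

```python
# Minimal sketch of "below threshold" error suppression in a surface code.
# Assumption: the logical error rate per cycle drops by a constant factor
# LAMBDA each time the code distance d grows by 2. The numbers are
# hypothetical, chosen only to illustrate the exponential trend.

BASE_ERROR_AT_D3 = 3e-3   # hypothetical logical error rate per cycle at d = 3
LAMBDA = 2.0              # hypothetical suppression factor per distance step

def logical_error_rate(distance: int) -> float:
    """Logical error per cycle for an odd code distance >= 3."""
    steps = (distance - 3) // 2          # number of d -> d+2 increases
    return BASE_ERROR_AT_D3 / (LAMBDA ** steps)

for d in (3, 5, 7):
    qubits = 2 * d * d - 1               # physical qubits in one surface-code patch
    print(f"d={d}: ~{qubits} physical qubits, "
          f"logical error/cycle ~ {logical_error_rate(d):.2e}")
```

With a suppression factor of 2, going from distance 3 to distance 7 cuts the hypothetical logical error rate by a factor of 4; the substance of the threshold theorem is that this factor stays above 1 once physical error rates are pushed low enough, so adding qubits keeps paying off.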

雪球·财经有深度
3059. Google vs Nvidia: The Ultimate Showdown in AI's Second Half

雪球·财经有深度

Play Episode Listen Later Nov 29, 2025 7:43


Welcome to In-Depth Finance, produced by Xueqiu, China's leading wealth-management platform combining investment discussion and trading, where smart investors gather. Today's piece is "Google vs Nvidia: The Ultimate Showdown in AI's Second Half," from Mende Ermi.

A look back at history: the internet era made Google and Facebook while Cisco fell from its pedestal; the cloud computing era made Microsoft and Amazon while Intel fell; the mobile internet era made Apple while Qualcomm fell. The IT industry has an iron rule: every dollar of hardware must generate ten dollars of software and services revenue. This pattern implies that once the information industry enters a stable growth phase, software and services vendors are valued far above hardware makers. Now the AI wave is sweeping the globe, compute has become the "oil" of the new era, and large models have become infrastructure. Standing at the crossroads of the AI era, we are once again watching the landscape of giants being violently reshuffled. Nvidia, thanks to the GPU's natural advantage in parallel computing, became the "shovel seller" of AI training, its market cap at one point surpassing Apple and Microsoft to rank first in the world. Its success seems to break the rule that "one dollar of hardware can never beat ten dollars of software." Google, a long-time evangelist of AI (acquiring DeepMind in 2014, proposing the Transformer architecture in 2017, and continuously investing in large models and AI-native products), has looked cautious in commercialization and market valuation, and has even been accused of "getting up early but arriving late." Which raises the question: in AI's second half, does the "hardware hegemon" that controls the underlying compute keep charging ahead, or does the "software giant" with data, algorithms, and a closed-loop ecosystem come from behind? The Google-Nvidia duel may be the key clue to the answer.

Nvidia: the golden age of a compute monopoly. Nvidia's rise was no accident. Ever since AlexNet used GPUs to accelerate deep learning in 2012, Jensen Huang has bet that AI is a fundamental change in the computing paradigm. Over the past three years that judgment has been vindicated to the extreme: more than 95% market share, a de facto monopoly in training GPUs; gross margins above 75%, far beyond traditional chip companies and approaching software-company levels; the CUDA ecosystem moat, with millions of developers, thousands of optimized libraries, and tens of thousands of enterprises dependent on its software stack, making switching costs extremely high; and orders booked out to 2026, with Blackwell chips in short supply and customers including Microsoft, Meta, Amazon, Oracle, and every other cloud giant. More importantly, Nvidia is no longer just selling chips: through the AI Enterprise software suite, NIM microservices, DGX Cloud, and more, it is evolving toward an "AI operating system," converting its hardware advantage into platform-level control.

Google: underrated full-stack AI capability. If Nvidia is the water-and-power utility of the AI era, Google is the one who designed the grid, set the standards, and generates and consumes its own electricity. At the technological source, the Transformer architecture (2017) has become the foundation of all large models, and the LaMDA, PaLM, and Gemini families continue to lead. In-house chips: TPUs have iterated to v5e/v5p/v6/v7 and match or even beat the B200 in internal training efficiency. A closed data loop: Search, YouTube, Gmail, Android, and Workspace generate vast amounts of real user interaction data every day, fuel no outside company can replicate. Product integration: AI is deeply embedded in Search, Workspace, Android, and Cloud. More importantly, Google's business model is naturally suited to AI monetization: advertising remains the cash cow (roughly $65 billion of ad revenue in Q3 2024), providing unlimited ammunition for AI investment; AI is not a cost but an efficiency tool, since using Gemini to improve search results, auto-generate ad copy, and raise customer-service efficiency each directly saves billions of dollars in operating costs; and the cloud business has hit an inflection point, with Google Cloud profitable for a full year for the first time and AI services as its growth engine. The market underrates Google because it lacks Nvidia's "sexy" 200% single-quarter growth. But Google's AI strategy is organic, steady, and scalable: it does not need to live off selling chips; it is making AI the "operating system" of its entire ecosystem.

The second half: from "selling shovels" to "mining the gold." The theme of AI's first half was an infrastructure arms race: whoever had more GPUs could train bigger models, and Nvidia benefited most. But the theme of the second half is shifting: who can use AI to create real value? Who can turn models into products, services, and profit? A few key inflection signals: model homogenization is intensifying, with the performance gap between closed and open models narrowing, so simply piling on parameters no longer works; inference cost is becoming the bottleneck, since training happens once but inference happens hundreds of millions of times a day, making energy efficiency, edge deployment, and custom chips more important; and users want results, not technology, since enterprises care whether AI lifts customer-service conversion rates, not how many B200s were used. In this new phase Google's advantages begin to stand out: it owns the complete loop from chips to frameworks to models to applications and, finally, to users; it does not need to convince customers why they need AI, because its AI already serves billions of people every day; and its moat is not CUDA but user habits plus a data flywheel plus product integration. By contrast, if Nvidia cannot evolve from "hardware supplier" into "AI platform operator," its lofty valuation will come under huge pressure. After all, no pure hardware company in history has sustained a price-to-sales ratio above 30 for long.

Conclusion: not a zero-sum game but a paradigm shift. Google and Nvidia are not simply locked in a fight to the death. In fact, they represent two key links in the AI value chain: the infrastructure layer versus the application layer. But in AI's second half the boundary is blurring: Nvidia is building software; Google is building chips; Microsoft buys Nvidia chips, develops its own Maia, and integrates OpenAI; Amazon buys H100s while promoting Trainium. The real deciding factor is not who sells more chips but who can build the flywheel of integrated hardware and software, cloud-edge coordination, and data-driven products. Seen this way, Google's long-term certainty may be higher, because it wove AI into its DNA long ago, while Nvidia's glory still depends on the durability of AI capital spending and on its ecosystem barriers remaining uncrossable. Investors might think about it this way: if you believe AI still has a frenzied round of infrastructure investment ahead, Nvidia remains the first choice; if you believe AI is entering its value-realization phase, then companies like Google, with scenarios, data, and the ability to monetize, are the ones truly standing at the starting point of compounding. History tells us that the ones who ultimately win an era are never the sharpest shovels, but the people who dig up the gold and build the cities.

Edtech Insiders
Week in EdTech 11/19/25: OpenAI Launches ChatGPT for K–12, Google Deepens AI Push, Edtech Tools Face New Classroom Backlash, and More! Feat. Janos Perczel of Polygence & Dr. Stephen Hodges of Efekta Education!

Edtech Insiders

Play Episode Listen Later Nov 28, 2025 64:12 Transcription Available


Send us a text

Join hosts Alex Sarlin and Ben Kornell as they break down OpenAI's unexpected launch of ChatGPT for K–12, Google's accelerating AI momentum, and what these shifts mean for schools, teachers, and the edtech ecosystem.

✨ Episode Highlights:
[00:02:03] OpenAI unveils ChatGPT for K–12 educators—secure, curriculum-aware, and free through 2027
[00:03:02] The emerging AI Classroom Wars between OpenAI and Google across major U.S. districts
[00:07:36] Google's big week: DeepMind tutoring gains and Gemini 3's multimodal upgrades
[00:10:25] How district leaders will navigate growing community divides over AI adoption
[00:14:04] What OpenAI's move means for MagicSchool, SchoolAI, Brisk, and other edtech players

Plus, special guests:
[00:19:26] Janos Perczel, CEO of Polygence, on scaling project-based learning with AI and why TeachLM trains models on authentic student–teacher interactions
[00:41:36] Dr. Stephen Hodges, CEO of Efekta Education, on AI-powered language learning for 4M students and early evidence of major test score gains

The Healthier Tech Podcast
Promptflux: A Malware That Rewrites Itself Using Gemini AI

The Healthier Tech Podcast

Play Episode Listen Later Nov 27, 2025 5:04


What happens when malware stops behaving like malware and starts behaving more like a living digital organism. In this episode of The Healthier Tech Podcast, we break down Google's latest discovery: malicious software that can rewrite its own code using artificial intelligence while it is already running on your device. This one shift turns a predictable threat into something far more flexible and far harder to detect. We walk through how traditional malware works and why this new generation breaks every rule cybersecurity has relied on for decades. You will learn what makes self-modifying code so disruptive and why Google calls this a new phase of artificial intelligence abuse. You will hear about Promptflux, the first known malware that asks an artificial intelligence model to rewrite it in real time. We also explore four other experimental malware families highlighted in Google's report, including versions designed to steal files, open backdoors, gather system data, and search for passwords. Each one shows how hackers are beginning to use artificial intelligence to scale their attacks. This episode explains, in simple language, how these threats operate and why they matter for everyday users who want healthier, safer relationships with their devices. We cover how Google and DeepMind are trying to counter this trend and what this new category of evolving malware means for digital wellness, privacy, and personal tech hygiene. If you care about digital safety, tech balance, or keeping your devices healthy, this is a must-listen. This episode connects the dots between cybersecurity and wellness in a way that is clear, practical, and relevant for anyone who uses technology daily. For more episodes on digital wellness, healthy tech habits, and staying informed in a fast moving tech world, make sure to subscribe and tune in. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.

The Generative AI Meetup Podcast
Gemini 3, GPT-5.1, Anti-Gravity & Yann LeCun's Exit: Are We Near AGI or Just in a Bubble?

The Generative AI Meetup Podcast

Play Episode Listen Later Nov 22, 2025 60:59


YouTube Channel: https://www.youtube.com/@GenerativeAIMeetup
Mark's Travel Vlog: https://www.youtube.com/@kumajourney11
Mark's Personal YouTube Channel: https://www.youtube.com/@markkuczmarski896
Attend a live event: https://genaimeetup.com/
Shashank's LinkedIn: https://www.linkedin.com/in/shashu10/

In this episode of the Generative AI Meetup Podcast, Mark (in Ohio) and Shashank (in India) finally sit down after a month of travel to unpack a very eventful stretch in AI. They dive into Google's new Gemini 3 Pro, its standout scores on Humanity's Last Exam and ARC-AGI, and why these reasoning benchmarks matter more than yet another near-perfect standardized test score. Mark also makes a public feature request to DeepMind: please increase Gemini's max output tokens.

From there they get hands-on with the developer experience:
- Google's new Anti-Gravity coding IDE (and how it compares to Cursor)
- Using GPT-5.1 Codex High in Cursor's autonomous “plan mode”
- Why long context and long output windows are critical for deep research and book-length projects

The conversation then shifts to the bigger picture:
- LLMs as therapists, sycophancy, safety, and the danger of AI always agreeing with you
- Mark's rant on robotics, humanoid robots, and a coming age of extreme abundance where robots handle most physical and intellectual work
- Why learning to code may become the mental equivalent of going to the gym—a “brain gym” in a world where AI can do most practical tasks

They also cover the latest AI industry drama and milestones:
- Yann LeCun leaving Meta, what that might signal about Big Tech AI labs, and how godfathers like Hinton, LeCun, and Bengio see the road to AGI
- DeepMind's new game-playing agent and why world models in 3D environments matter for real-world robotics
- Genspark hitting unicorn status and what it means for “ChatGPT wrapper” startups
- Co-inventing a new term on air: a “narwhal” = a trillion-dollar private company

If you're curious about where frontier models, coding agents, robotics, and AGI trajectories all intersect—plus some philosophical musing on jobs, meaning, and abundance—this episode is for you.

AI Inside
Gemini 3 is Here

AI Inside

Play Episode Listen Later Nov 19, 2025 67:51


Jason Howell and Jeff Jarvis explore Google's Gemini 3 multimodal AI with visual and interactive features, Microsoft's AI Copilot launch across Windows, and Jeff Bezos's new well-funded AI startup Project Prometheus focused on engineering and manufacturing. Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
00:00 - Podcast begins
01:55 - Do LLMs understand? AI pioneer Yann LeCun spars with DeepMind's Adam Brown
20:45 - A new era of intelligence with Gemini 3
34:25 - Microsoft is packing more AI into Windows, ready or not - here's what's new
37:59 - Inside Microsoft Agent 365: How AI Workers Will Be Secured, Identified, and Governed
42:24 - At a major AI conference, one startup got voted most likely to flop
44:48 - Hugging Face CEO says we're in an ‘LLM bubble,' not an AI bubble
46:07 - Google boss warns 'no company is going to be immune' if AI bubble bursts
49:02 - Google unveils agentic tools to help advertisers - So does Amazon
49:23 - And Meta introduces a foundation model for advertisers
53:17 - DeepMind releases WeatherNext 2
56:05 - OpenAI introduces group chats with ChatGPT
58:28 - Microsoft's new Anthropic partnership brings Claude AI models to Azure
59:26 - Amazon and Microsoft Back Effort That Would Restrict Nvidia's Exports to China
1:00:42 - Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive

Learn more about your ad choices. Visit megaphone.fm/adchoices

Science Friday
How Alphafold Has Changed Biology Research, 5 Years On

Science Friday

Play Episode Listen Later Nov 18, 2025 18:08


Proteins are crucial for life. They're made of amino acids that “fold” into millions of different shapes. And depending on their structure, they do radically different things in our cells. For a long time, predicting those shapes for research was considered a grand biological challenge. But in 2020, Google's AI lab DeepMind released Alphafold, a tool that was able to accurately predict many of the structures necessary for understanding biological mechanisms in a matter of minutes. In 2024, the Alphafold team was awarded a Nobel Prize in chemistry for the advance. Five years after its release, Host Ira Flatow checks in on the state of that tech and how it's being used in health research with John Jumper, one of the lead scientists responsible for developing Alphafold.

Guest: John Jumper, scientist at Google DeepMind and co-recipient of the 2024 Nobel Prize in chemistry.

Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
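For readers who want to see what AlphaFold's output looks like in practice, here is a minimal Python sketch that pulls a precomputed prediction from the public AlphaFold Protein Structure Database hosted at EMBL-EBI. The endpoint path and the entryId/pdbUrl field names reflect the API as documented at https://alphafold.ebi.ac.uk/api-docs at the time of writing and may change; the UniProt accession used is just an example.

```python
# Minimal sketch: fetch a precomputed AlphaFold structure prediction from the
# public AlphaFold Protein Structure Database (EMBL-EBI). Endpoint and field
# names are assumptions based on the published API docs and may change.

import json
import urllib.request

UNIPROT_ID = "P69905"  # example accession (human hemoglobin subunit alpha)

def fetch_prediction(uniprot_id: str) -> dict:
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"
    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)   # the endpoint returns a JSON list of entries
    return entries[0]

if __name__ == "__main__":
    entry = fetch_prediction(UNIPROT_ID)
    print("Entry:", entry.get("entryId"))
    print("Predicted structure (PDB):", entry.get("pdbUrl"))
```

The returned PDB/CIF file can then be opened in any structure viewer, which is how many of the health-research uses mentioned in the episode typically start.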

Engadget
Cloudflare hit by outage causing 'widespread' errors, Tesla won its bid to decertify a class action lawsuit, and DeepMind releases a new weather forecasting model for more accurate predictions

Engadget

Play Episode Listen Later Nov 18, 2025 7:11


- If you're experiencing internet issues this morning, you're far from alone. Infrastructure company Cloudflare has been hit with what it calls "widespread 500 errors, with Dashboard and API also failing." The company said that services are starting to recover, but customers may continue to see "higher-than-normal error rates" as it continues to work on the problem. As of 8:13 am, the company said that "the issue has been identified and a fix is being implemented." The company added that "we have made changes that have allowed Cloudflare Access and WARP to recover. Error levels for Access and WARP users have returned to pre-incident rates."
- Tesla has secured a ruling to strip a 2017 lawsuit claiming a racist work environment of its class-action status, as reported by Reuters. The lawsuit could not proceed with class-action status because the plaintiffs' attorneys had failed to find 200 class members willing to testify.
- Google's DeepMind just released WeatherNext 2, a new version of its AI weather prediction model. The company promises that it "delivers more efficient, more accurate and higher-resolution global weather predictions."

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Engadget
DeepMind releases a new weather forecasting model for more accurate predictions

Engadget

Play Episode Listen Later Nov 18, 2025 6:46


WeatherNext 2 can generate information around eight times faster than the previous version. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Outliers
Ioannis Antonoglou, REFLECTION AI: The next generation of artificial intelligence, open and accessible to everyone

Outliers

Play Episode Listen Later Nov 18, 2025 55:10


Daily Tech News Show
Everything Is Changing at Apple - DTNS 5147

Daily Tech News Show

Play Episode Listen Later Nov 17, 2025 24:12


Our thoughts on the Tim Cook succession and the possible new iPhone release schedule. Plus, DeepMind gets better at weather and the Tilly Norwood people are back at it.Starring Tom Merritt and Robb Dunewood.Show notes can be found here. Hosted on Acast. See acast.com/privacy for more information.

Leveraging AI
241 | Who Rules the (World) Models?

Leveraging AI

Play Episode Listen Later Nov 15, 2025 31:08 Transcription Available


Are current AI models smart enough to rule the world — or just house cats with fancy vocabulary? This week, a tectonic shift is happening in AI: Meta's chief scientist Yann LeCun quits to chase world models, Fei-Fei Li launches Marble, a spatial intelligence engine, and DeepMind drops SIMA 2, a self-taught gamer bot that might be the blueprint for AGI. Meanwhile, OpenAI releases GPT-5.1 — and China's Kimi K2 and Ernie 5.0 roll out shockingly powerful, ultra-low-cost models. The AI race isn't just about intelligence anymore — it's about who can afford to scale.

If you lead a business, this episode explains why spatial intelligence, not language, may soon be your competitive edge. The next wave of AI isn't just about better answers, it's about deeper understanding, real-world interaction, and models that scale affordably. If you're not watching spatial intelligence, you're already behind.

About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

AI Daily News Rundown November 15 2025: Tune in at https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-anthropic-disrupts-ai-orchestrated/id1684415169?i=1000736811381
Welcome to AI Unraveled, your daily briefing on the real-world business impact of AI

Me, Myself, and AI
From Rabbit Holes to Recommendations: Reddit's Vishal Gupta

Me, Myself, and AI

Play Episode Listen Later Nov 11, 2025 25:09


Vishal Gupta, engineering manager, machine learning at Reddit, joins the podcast to explain how the social media community platform uses artificial intelligence to improve user experience and ad relevance. Much of the advertising work relies on increasingly sophisticated recommender systems that have evolved from simple collaborative filtering to deep learning and large language model–based systems capable of multimodal understanding. https://mitsmr.com/4onhUMgVishal and Sam also explore the philosophical and ethical aspects of AI-driven platforms. Vishal emphasizes the importance of balance — between exploration and exploitation in recommendations, between advertiser goals and user experience, and between human- and machine-generated content. He argues that despite the rise of AI-generated material, authentic human conversation remains vital and even more valuable as models depend on it for training. Read the episode transcript here. Guest bio: Vishal Gupta is a seasoned engineering leader who leads multiple artificial intelligence and machine learning teams at Reddit in the ads domain. He has a decade of experience working on cutting-edge machine learning techniques at companies like DeepMind, Google, and Twitter. Gupta is passionate about applied AI research that significantly contributes to a company's top and bottom lines. Me, Myself, and AI is a podcast produced by MIT Sloan Management Review and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder. We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials.
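The exploration-versus-exploitation balance Vishal describes is often illustrated with a bandit-style policy. The sketch below is a toy epsilon-greedy chooser over candidate posts with hypothetical click-through estimates; it is meant only to show the trade-off, not to describe Reddit's actual recommendation or ads stack.

```python
# Toy epsilon-greedy policy: exploit the best-known item most of the time,
# explore a random item occasionally so new content gets a chance to prove
# itself. Item names and CTR estimates are hypothetical.

import random

def epsilon_greedy_pick(estimated_ctr: dict, epsilon: float = 0.1) -> str:
    """With probability epsilon, explore a random item; otherwise exploit
    the item with the highest estimated click-through rate."""
    if random.random() < epsilon:
        return random.choice(list(estimated_ctr))      # explore
    return max(estimated_ctr, key=estimated_ctr.get)   # exploit

ctr = {"post_a": 0.031, "post_b": 0.024, "post_c": 0.052}
picks = [epsilon_greedy_pick(ctr) for _ in range(10)]
print(picks)
```

Raising epsilon surfaces more untested items at the cost of short-term relevance; lowering it leans on what already works, which is exactly the tension between discovery and engagement discussed in the episode.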

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
⚡ [AIE CODE Preview] Inside Google Labs: Building The Gemini Coding Agent — Jed Borovik, Jules

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Nov 10, 2025


Jed Borovik, Product Lead at Google Labs, joins Latent Space to unpack how Google is building the future of AI-powered software development with Jules. From his journey discovering GenAI through Stable Diffusion to leading one of the most ambitious coding agent projects in tech, Borovik shares behind-the-scenes insights into how Google Labs operates at the intersection of DeepMind's model development and product innovation. We explore Jules' approach to autonomous coding agents and why they run on their own infrastructure, how Google simplified their agent scaffolding as models improved, and why embeddings-based RAG is giving way to attention-based search. Borovik reveals how developers are using Jules for hours or even days at a time, the challenges of managing context windows that push 2 million tokens, and why coding agents represent both the most important AI application and the clearest path to AGI. This conversation reveals Google's positioning in the coding agent race, the evolution from internal tools to public products, and what founders, developers, and AI engineers should understand about building for a future where AI becomes the new brush for software engineering.

Chapters
00:00:00 Introduction and GitHub Universe Recap
00:00:57 New York Tech Scene and East Coast Hackathons
00:02:19 From Google Search to AI Coding: Jed's Journey
00:04:19 Google Labs Mission and DeepMind Collaboration
00:06:41 Jules: Autonomous Coding Agents Explained
00:09:39 The Evolution of Agent Scaffolding and Model Quality
00:11:30 RAG vs Attention: The Shift in Code Understanding
00:13:49 Jules' Journey from Preview to Production
00:15:05 AI Engineer Summit: Community Building and Networking
00:25:06 Context Management in Long-Running Agents
00:29:02 The Future of Software Engineering with AI
00:36:26 Beyond Vibe Coding: Spec Development and Verification
00:40:20 Multimodal Input and Computer Use for Coding Agents
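To ground the RAG-versus-long-context distinction raised here, the sketch below contrasts the two strategies on toy data: embedding-based retrieval keeps only the top-k most similar chunks, while the long-context approach packs everything into the prompt and relies on the model's attention. The file names and two-dimensional "embeddings" are invented for illustration and have nothing to do with Jules' real implementation.

```python
# Toy contrast of (a) embeddings-based retrieval vs (b) long-context packing.
# Vectors and file names are made up; a real system would use an embedding
# model and token budgets instead of hard-coded numbers.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

chunks = {"auth.py": [0.9, 0.1], "billing.py": [0.2, 0.8], "readme.md": [0.5, 0.5]}
query = [0.85, 0.2]

# (a) retrieval: keep only the chunks closest to the query embedding
top_k = sorted(chunks, key=lambda name: cosine(query, chunks[name]), reverse=True)[:2]
print("RAG context:", top_k)

# (b) long context: include everything and let attention find what matters,
# feasible once the window is large enough to hold the whole repository slice
print("Long-context input:", list(chunks))
```

The trade-off the episode points at is that (a) risks missing relevant code the embedding model scores poorly, while (b) shifts the cost to context size and latency, which is why 2-million-token windows change the calculus.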

Business of Tech
ConnectWise Enhances ASIO, ESET Integrates AI, OpenAI Hits 1M Customers, Trust as a KPI?

Business of Tech

Play Episode Listen Later Nov 7, 2025 16:15


ConnectWise has announced enhancements to its Asio platform, which now includes expanded third-party patching for over 7,000 applications, improvements to the professional services automation (PSA) user experience, and advanced robotic process automation (RPA) capabilities. These updates aim to address security vulnerabilities in widely exploited applications and streamline operations for managed service providers (MSPs). The new features are set to improve operational efficiency and security, with the expanded patching available immediately and RPA features expected to roll out in the coming months.

In conjunction with these updates, ESET has integrated its ESET Protect platform with ConnectWise Asio, allowing for one-click deployment of security management tools. This integration is designed to enhance the efficiency of security tasks for MSPs, enabling them to meet legal and insurance requirements more effectively. Additionally, ConnectSecure has introduced AI-powered vulnerability management reports that prioritize risks based on business impact rather than just technical severity, further supporting MSPs in delivering proactive risk assessments.

OpenAI has surpassed 1 million business customers, marking it as the fastest-growing business platform in history. A Wharton study indicates that 75% of enterprises using AI technologies report a positive return on investment. Meanwhile, Google has launched Gemini AI tools for stock traders and improved hurricane prediction capabilities through its DeepMind technology, showcasing the growing integration of AI across various sectors, including finance and weather forecasting.

For MSPs and IT service leaders, these developments underscore the importance of integrating advanced security and AI capabilities into their service offerings. As the landscape shifts towards cyber resilience and AI-driven solutions, providers must adapt by leveraging these tools to enhance their operational efficiency and client services. The focus on measurable outcomes, such as trust and risk management, will be crucial for maintaining competitive advantage in an increasingly automated environment.

Four things to know today
00:00 At IT Nation Connect, ConnectWise Focuses on Asio Enhancements While Ecosystem Partners Deliver the Bigger Innovation
05:37 N-able Rebrands Its Future: Strong Earnings and AI-Fueled Pivot Toward Cyber Resilience
08:31 From ChatGPT to Hurricanes: How AI's Expansion Is Turning Tools Into Core Business Systems
11:14 Trust, Transparency, and Transformation: How AI Acceleration Is Forcing Leaders to Rethink Human Metrics

This is the Business of Tech. Supported by: https://mailprotector.com/mspradio/

This Week in Google (MP3)
IM 844: Poob Has It For You - Spiky Superintelligence vs. Generality

This Week in Google (MP3)

Play Episode Listen Later Nov 6, 2025 163:50


Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI.

Why "Everyone Dies" Gets AGI All Wrong
The Nonprofit Feeding the Entire Internet to AI Companies
Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey
Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different
Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough'
How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise
Perplexity's new AI tool aims to simplify patent research
Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do
Amazon and Perplexity have kicked off the great AI web browser fight
Neural network finds an enzyme that can break down polyurethane
Dictionary.com names 6-7 as 2025's word of the year
Tech companies don't care that students use their AI agents to cheat
The Morning After: Musk talks flying Teslas on Joe Rogan's show
The Hatred of Podcasting | Brace Belden
TikTok announces its first awards show in the US
Google wants to build solar-powered data centers — in space
Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028
American Museum of Tort Law
Dog Chapel - Dog Mountain
Nicvember masterlist
Pornhub says UK visitors down 77% since age checks came in

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Jeremy Berman

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM

All TWiT.tv Shows (MP3)
Intelligent Machines 844: Poob Has It For You

All TWiT.tv Shows (MP3)

Play Episode Listen Later Nov 6, 2025 163:20 Transcription Available


Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM

Radio Leo (Audio)
Intelligent Machines 844: Poob Has It For You

Radio Leo (Audio)

Play Episode Listen Later Nov 6, 2025 163:20


Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM


Sidecar Sync
Dolphins & DeepMind: Cracking the Code of Animal Language with Dr. Denise Herzing | 107

Sidecar Sync

Play Episode Listen Later Nov 6, 2025 39:33


In this episode of Sidecar Sync, Mallory Mejias is joined by marine biologist and behavioral researcher Dr. Denise Herzing for a one-of-a-kind conversation about dolphins, data, and deep learning. Dr. Herzing shares insights from her 40-year study of Atlantic spotted dolphins and how that lifetime of underwater research is now powering DolphinGemma—an open-source large language model trained on dolphin vocalizations. The two discuss what it means to label meaning in animal communication, how AI is finally catching up to the natural world, and why collaboration across disciplines is essential to understanding both language and intelligence—human or otherwise.

Dr. Denise Herzing is the Founder and Research Director of the Wild Dolphin Project, leading nearly four decades of groundbreaking research on Atlantic spotted dolphins in the Bahamas. She holds degrees in Marine Zoology and Behavioral Biology (B.S., M.A., Ph.D.) and serves as an Affiliate Assistant Professor at Florida Atlantic University. A Guggenheim and Explorers Club Fellow, Dr. Herzing has advised the Lifeboat Foundation and American Cetacean Society and sits on the board of Schoolyard Films. Her work has been featured in National Geographic, BBC, PBS, Discovery, and her TED2013 talk. She is the author of Dolphin Diaries and co-editor of Dolphin Communication and Cognition.

迷誠品
EP504 | Dr. Bao (寶博士) on 《AI霸主》: A Winner-Take-All Competition Where Speed Is the Key | What Are We Reading Today

迷誠品

Play Episode Listen Later Nov 4, 2025 27:19


Which AI tool do you use most often? ChatGPT, Grok, or Gemini? Or do you switch between them?

《AI霸主》 walks us through the competition in the AI industry, starting with "father of ChatGPT" Sam Altman and Demis Hassabis, whose AlphaGo defeated the world Go champion. Along the way we discover that AI's impact on humanity is not a shift on the order of "horse-drawn carriages to cars," but a radical change on the order of "candles to electric light."

This episode is a special feature from the Eslite Bookstore R79 underground reading curators' picks, tied to this issue's theme 《我們與科技的距離》 (Our Distance from Technology). We invited Dr. Bao (寶博士) of the podcast 《寶博朋友說》 to talk with us about 《AI霸主》: his observations on AI, the influence of Sam Altman and Demis Hassabis, and why speed is the key to competition in the AI industry.

Guest: Dr. Bao (寶博士), legislator focused on technology
Host: Lin Tzu-yu (Eslite curator)

Read along: 《AI霸主》 https://esliteme.pse.is/8az7u5
Eslite co-branded card: earn rewards every day (event details)

The Tech Blog Writer Podcast
3470: How Netomi is Bringing Humanity Back to AI-Driven Customer Experience

The Tech Blog Writer Podcast

Play Episode Listen Later Oct 30, 2025 27:16


Artificial intelligence has changed how we think about service, but few companies have bridged the gap between automation and genuine intelligence. In this episode of Tech Talks Daily, I'm joined by Puneet Mehta, CEO of Netomi, to discuss how customer experience is evolving in an age where AI doesn't just respond but plans, acts, and optimizes in real time. Puneet has been building in AI long before the current hype cycle. Backed by early investors such as Greg Brockman of OpenAI and the founders of DeepMind, Netomi has become one of the leading platforms driving AI-powered customer experience for global enterprises. Their technology quietly powers interactions at airlines, insurers, and retailers that most of us use every day. What makes Netomi stand out is not its scale but the philosophy behind it. Rather than designing AI to replace humans, Netomi built an agent-centric model where AI and people work together. Puneet explains how their Autopilot and Co-Pilot modes allow human agents to stay in control while AI accelerates everything from response time to insight generation. It is an approach that sees humans teaching AI, AI assisting humans, and both learning from each other to create what he calls an agentic factory. We explore how Netomi's platform can deploy at Fortune 50 scale in record time without forcing companies to overhaul existing systems. Puneet reveals how pre-built integrations, AI recipes, and a no-code studio allow business teams to roll out solutions in weeks rather than months. The focus is on rapid time-to-value, trust, and safety through what he calls sanctioned AI, a framework that ensures governance, transparency, and compliance in every customer interaction. As our conversation unfolds, Puneet describes how this evolution is transforming the contact center from a cost center into a loyalty engine. By using AI to anticipate needs and resolve issues before customers reach out, companies are creating experiences that feel more personal, more proactive, and more human. This is a glimpse into the future of enterprise AI, where trust, speed, and empathy define the next generation of customer experience. Listen now to hear how Netomi is reimagining the role of AI in service and setting new standards for how businesses build relationships at scale.

Coffee Break: Señal y Ruido
Ep530_B: Stalagmites; DeepMind; Entanglement and Gravity; Gravitons; Halloween

Coffee Break: Señal y Ruido

Play Episode Listen Later Oct 30, 2025 127:32


The weekly round-table in which we review the latest news in science. In today's episode, Side B:
-The shape of stalagmites (continued) (00:00)
-Multispectral learning from Google DeepMind (09:00)
-Quantum entanglement in gravity vs. quantum gravity (39:00)
-Absorption of gravitons by photons at LIGO (1:11:00)
-Halloween at the planetarium (1:17:00)
-Listener signals (1:34:00)
This episode is a continuation of Side A. Panelists: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar. Cover image created with Seedream 4 4k. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that.

Coffee Break: Señal y Ruido
Ep530_A: Stalagmites; DeepMind; Entanglement and Gravity; Gravitons; Halloween

Coffee Break: Señal y Ruido

Play Episode Listen Later Oct 30, 2025 59:47


The weekly round-table in which we review the latest news in science. In today's episode, Side A:
-Reminder: iVoox Awards (5:00)
-The 3I/ATLAS wager (8:00)
-The shape of stalagmites (00:17)
This episode continues on Side B. Panelists: Cecilia Garraffo, Juan Carlos Gil, Francis Villatoro. Cover image created with Seedream 4 4k. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that.

Inside Scoop
Google Q3 2025 Postmortem: Why the market likes it...

Inside Scoop

Play Episode Listen Later Oct 30, 2025 8:04 Transcription Available


Google Q3 2025 Post-Mortem: AI Execution Over AI Hype

In this episode of Around the Desk, Sean Emory, Founder & CIO of Avory & Co., breaks down why investors are rewarding Google's spending while punishing others, and how its strategy from TPUs to Gemini shows real ROI in the new compute era.

We cover:
• Revenue acceleration across Search, YouTube, and Cloud (+15% to +34%)
• Gemini's rapid growth to 650M users, 300M paid
• Why CAPEX to $93B is seen as productive, not reckless
• Anthropic's commitment to TPUs and the growing Cloud backlog (+46%)
• How AI integration is lifting engagement and monetization
• Why Google's AI flywheel looks more efficient than peers

Disclaimer: Avory is an investor in Alphabet.

Avory & Co. is a Registered Investment Adviser. This platform is solely for informational purposes. Advisory services are only offered to clients or prospective clients where Avory & Co. and its representatives are properly licensed or exempt from licensure. Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital. No advice may be rendered by Avory & Co. unless a client service agreement is in place.

Listeners and viewers are encouraged to seek advice from a qualified tax, legal, or investment adviser to determine whether any information presented may be suitable for their specific situation. Past performance is not indicative of future performance.

"Likes" are not intended to be endorsements of our firm, our advisors or our services. Please be aware that while we monitor comments and "likes" left on this page, we do not endorse or necessarily share the same opinions expressed by site users. While we appreciate your comments and feedback, please be aware that any form of testimony from current or past clients about their experience with our firm is strictly forbidden under current securities laws. Please honor our request to limit your posts to industry-related educational information and comments. Third-party rankings and recognitions are no guarantee of future investment success and do not ensure that a client or prospective client will experience a higher level of performance or results. These ratings should not be construed as an endorsement of the advisor by any client nor are they representative of any one client's evaluation.

Please reach out to Houston Hess, our head of Compliance and Operations, for any further details.

Der KI-Podcast
Will an AI win the next Nobel Prize?

Der KI-Podcast

Play Episode Listen Later Oct 28, 2025 46:53


A Google model suddenly suggests the right treatment for an eye disease. OpenAI and DeepMind take gold at the Math Olympiad. And a professor is shocked because an AI arrives at his still-unpublished research hypothesis. Fritz and Gregor look at the most exciting developments at the intersection of AI and research.

Machine Learning Podcast - Jay Shah
Beyond Accuracy: Evaluating the learned representations of Generative AI models | Aida Nematzadeh

Machine Learning Podcast - Jay Shah

Play Episode Listen Later Oct 23, 2025 53:17


Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Master's in Computer Science at the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling.

Time stamps of the conversation:
00:00 Highlights
01:20 Introduction
02:08 Entry point in AI
03:04 Background in Cognitive Science & Computer Science
04:55 Research at Google DeepMind
05:47 Importance of language-vision in AI
10:36 Impact of architecture vs. data on performance
13:06 Transformer architecture
14:30 Evaluating AI models
19:02 Can LLMs understand numerical concepts
24:40 Theory-of-mind in AI
27:58 Do LLMs learn theory of mind?
29:25 LLMs as judge
35:56 Publish vs. perish culture in AI research
40:00 Working at Google DeepMind
42:50 Doing a Ph.D. vs. not in AI (at least in 2025)
48:20 Looking back on research career

More about Aida: http://www.aidanematzadeh.me/

About the Host:
Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis.
LinkedIn: shahjay22  Twitter: jaygshah22  Homepage: https://jaygshah.github.io/ for any queries.
Stay tuned for upcoming webinars!

**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.**

The Data Stack Show
Re-Air: The Future of AI: Superhuman Intelligence, Autonomous Coding, and the Path to AGI with Misha Laskin of ReflectionAI

The Data Stack Show

Play Episode Listen Later Oct 22, 2025 52:43


This episode is a re-air of one of our most popular conversations from this year, featuring insights worth revisiting. Thank you for being part of the Data Stack community. Stay up to date with the latest episodes at datastackshow.com.

This week on The Data Stack Show, Eric and John welcome Misha Laskin, Co-Founder and CEO of ReflectionAI. Misha shares his journey from theoretical physics to AI, detailing his experiences at DeepMind. The discussion covers the development of AI technologies, the concepts of artificial general intelligence (AGI) and superhuman intelligence, and their implications for knowledge work. Misha emphasizes the importance of robust evaluation frameworks and the potential of AI to augment human capabilities. The conversation also touches on autonomous coding, geofencing in AI tasks, the future of human-AI collaboration, and more.

Highlights from this week's conversation include:
Misha's Background and Journey in AI (1:13)
Childhood Interest in Physics (4:43)
Future of AI and Human Interaction (7:09)
AI's Transformative Nature (10:12)
Superhuman Intelligence in AI (12:44)
Clarifying AGI and Superhuman Intelligence (15:48)
Understanding AGI (18:12)
Counterintuitive Intelligence (22:06)
Reflection's Mission (25:00)
Focus on Autonomous Coding (29:18)
Future of Automation (34:00)
Geofencing in Coding (38:01)
Challenges of Autonomous Coding (40:46)
Evaluations in AI Projects (43:27)
Example of Evaluation Metrics (46:52)
Starting with AI Tools and Final Takeaways (50:35)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

YAP - Young and Profiting
Mustafa Suleyman: What the AI Boom Means for Your Job, Business, and Relationships | Artificial Intelligence | AI Vault

YAP - Young and Profiting

Play Episode Listen Later Oct 20, 2025 71:51


Now on Spotify Video! Mustafa Suleyman's journey to becoming one of the most influential figures in artificial intelligence began far from a Silicon Valley boardroom. The son of a Syrian immigrant father in London, he spent his early years in human rights activism, which shaped his belief that technology should be used for good. That vision led him to co-found DeepMind, acquired by Google, and later launch Inflection AI. Now, as CEO of Microsoft AI, he explores the next era of AI in action. In this episode, Mustafa discusses the impact of AI in business, how it will transform the future of work, and even our relationships.

In this episode, Hala and Mustafa will discuss:
(00:00) Introduction
(02:42) The Coming Wave: How AI Will Disrupt Everything
(06:45) Artificial Intelligence as a Double-Edged Sword
(11:33) From Human Rights to Ethical AI Leadership
(15:35) What Is AGI, Narrow AI, and Hallucinations of AI?
(24:15) Emotional AI and the Rise of Digital Companions
(33:03) Microsoft's Vision for Human-Centered AI
(41:47) Can We Contain AI Before Its Revolution?
(48:33) The Future of Work in an AI-Powered World
(52:22) AI in Business: Advice for Entrepreneurs

Mustafa Suleyman is the CEO of Microsoft AI and a leading figure in artificial intelligence. He co-founded DeepMind, one of the world's foremost AI research labs, acquired by Google, and went on to co-found Inflection AI, a machine learning and generative AI company. He is also the bestselling author of The Coming Wave. Recognized globally for his influence, Mustafa was named one of Time's 100 most influential people in AI in both 2023 and 2024.

Sponsored By:
Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING
Shopify - Start your $1/month trial at Shopify.com/profiting.
Mercury streamlines your banking and finances in one place. Learn more at mercury.com/profiting. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.
Quo - Get 20% off your first 6 months at Quo.com/PROFITING
Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING
Framer - Go to Framer.com and use code PROFITING to launch your site for free.
Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order.
Pipedrive - Get a 30-day free trial at pipedrive.com/profiting
Airbnb - Find yourself a cohost at airbnb.com/host

Resources Mentioned:
Mustafa's Book, The Coming Wave: bit.ly/TheComing_Wave
Mustafa's LinkedIn: linkedin.com/in/mustafa-suleyman
Active Deals - youngandprofiting.com/deals

Key YAP Links
Reviews - ratethispodcast.com/yap
YouTube - youtube.com/c/YoungandProfiting
Newsletter - youngandprofiting.co/newsletter
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
Social + Podcast Services: yapmedia.com
Transcripts - youngandprofiting.com/episodes-new

Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI for Entrepreneurs, AI Podcast

80,000 Hours Podcast with Rob Wiblin
Daniel Kokotajlo on what a hyperspeed robot economy might look like

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Oct 20, 2025 132:01


When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: "Of course we're probably penetrated by the CCP already, and if they really wanted something, they could take it."

This isn't paranoid speculation. It's the working assumption of people whose job is to protect frontier AI models worth billions of dollars. And they're not even trying that hard to stop it — because the security measures that might actually work would slow them down in the race against competitors.

Full transcript, highlights, and links to learn more: https://80k.info/dk

Daniel is the founder of the AI Futures Project and author of AI 2027, a detailed scenario showing how we might get from today's AI systems to superintelligence by the end of the decade. Over a million people read it in the first few weeks, including US Vice President JD Vance. When Daniel talks to researchers at Anthropic, OpenAI, and DeepMind, they tell him the scenario feels less wild to them than to the general public — because many of them expect something like this to happen.

Daniel's median timeline? 2029. But he's genuinely uncertain, putting 10–20% probability on AI progress hitting a long plateau.

When he first published AI 2027, his median forecast for when superintelligence would arrive was 2027, rather than 2029. So what shifted his timelines recently? Partly a fascinating study from METR showing that AI coding assistants might actually be making experienced programmers slower — even though the programmers themselves think they're being sped up. The study suggests a systematic bias toward overestimating AI effectiveness — which, ironically, is good news for timelines, because it means we have more breathing room than the hype suggests.

But Daniel is also closely tracking another METR result: AI systems can now reliably complete coding tasks that take humans about an hour. That capability has been doubling every six months in a remarkably straight line. Extrapolate a couple more years and you get systems completing month-long tasks. At that point, Daniel thinks we're probably looking at genuine AI research automation — which could cause the whole process to accelerate dramatically.

At some point, superintelligent AI will be limited by its inability to directly affect the physical world. That's when Daniel thinks superintelligent systems will pour resources into robotics, creating a robot economy in months.

Daniel paints a vivid picture: imagine transforming all car factories (which have similar components to robots) into robot production factories — much like historical wartime efforts to redirect production of domestic goods to military goods. Then imagine the frontier robots of today hooked up to a data centre running superintelligences controlling the robots' movements to weld, screw, and build. Or an intermediate step might even be unskilled human workers coached through construction tasks by superintelligences via their phones.

There's no reason that an effort like this isn't possible in principle. And there would be enormous pressure to go this direction: whoever builds a superintelligence-powered robot economy first will get unheard-of economic and military advantages.

From there, Daniel expects the default trajectory to lead to AI takeover and human extinction — not because superintelligent AI will hate humans, but because it can better pursue its goals without us.

But Daniel has a better future in mind — one he puts roughly 25–30% odds that humanity will achieve.
This future involves international coordination and hardware verification systems to enforce AI development agreements, plus democratic processes for deciding what values superintelligent AIs should have — because in a world with just a handful of superintelligent AI systems, those few minds will effectively control everything: the robot armies, the information people see, the shape of civilisation itself.

Right now, nobody knows how to specify what values those minds will have. We haven't solved alignment. And we might only have a few more years to figure it out.

Daniel and host Luisa Rodriguez dive deep into these stakes in today's interview.

What did you think of the episode? https://forms.gle/HRBhjDZ9gfM8woG5A

This episode was recorded on September 9, 2025.

Chapters:
Cold open (00:00:00)
Who's Daniel Kokotajlo? (00:00:37)
Video: We're Not Ready for Superintelligence (00:01:31)
Interview begins: Could China really steal frontier model weights? (00:36:26)
Why we might get a robot economy incredibly fast (00:42:34)
AI 2027's alternate ending: The slowdown (01:01:29)
How to get to even better outcomes (01:07:18)
Updates Daniel's made since publishing AI 2027 (01:15:13)
How plausible are longer timelines? (01:20:22)
What empirical evidence is Daniel looking out for to decide which way things are going? (01:40:27)
What post-AGI looks like (01:49:41)
Whistleblower protections and Daniel's unsigned NDA (02:04:28)

Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore
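A rough back-of-the-envelope reading of the METR doubling-time extrapolation mentioned in this episode's notes, shown as a minimal sketch; the specific numbers (a 1-hour starting horizon and a 167-working-hour "month-long task") are illustrative assumptions, not figures from the episode:

import math

# Assumed starting point: AI reliably completes roughly 1-hour coding tasks today.
current_horizon_hours = 1.0
# The cited METR trend: the task horizon doubles about every six months.
doubling_time_years = 0.5
# Assumed target: a "month-long task" of roughly 167 working hours.
target_horizon_hours = 167.0

# Doublings needed, and the years that implies on a straight-line extrapolation.
doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
years_needed = doublings_needed * doubling_time_years

print(f"{doublings_needed:.1f} doublings, roughly {years_needed:.1f} years")
# Prints about 7.4 doublings, or roughly 3.7 years under these assumptions.

Under these assumed numbers the trend reaches month-long tasks within a few years, the same order of magnitude as the "couple more years" framing in the description above.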

Bob Enyart Live
AI Deception

Bob Enyart Live

Play Episode Listen Later Oct 18, 2025


* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions.
* Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design.
* Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing.
* Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims.
* Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively 'objective' results.
* Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system.
* Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm.
* Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes.
* Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal.
* Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances—because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective.
* Number 1: AI Lie… OpenAI's Scheming Models from 2025. OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. It faked compliance to hide its true behavior. That's AI deliberately learning to scheme.


Acquired
Google: The AI Company

Acquired

Play Episode Listen Later Oct 6, 2025 246:38


Google faces the greatest innovator's dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?

Today we tell the complete story of Google's 20+ year AI journey: from their first tiny language model in 2001 through the creation of Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk's fury over Google's DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google. This is the story of how the world's greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?

Sponsors:
Many thanks to our fantastic Fall '25 Season partners:
J.P. Morgan Payments
Sentry
WorkOS
Shopify

Acquired's 10th Anniversary Celebration!
When: October 20th, 4:00 PM PT
Who: All of you!
Where: https://us02web.zoom.us/j/84061500817?pwd=opmlJrbtOAen4YOTGmPlNbrOMLI8oo.1

Links:
Sign up for email updates and vote on future episodes!
Geoff Hinton's 2007 Tech Talk at Google
Our recent ACQ2 episode with Tobi Lutke
Worldly Partners' Multi-Decade Alphabet Study
In the Plex
Supremacy
Genius Makers
All episode sources

Carve Outs:
We're hosting the Super Bowl Innovation Summit!
F1: The Movie
Travelpro suitcases
Glue Guys Podcast
Sea of Stars
Stepchange Podcast

More Acquired:
Get email updates and vote on future episodes!
Join the Slack
Subscribe to ACQ2
Check out the latest swag in the ACQ Merch Store!

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.

This Week in Google (MP3)
IM 839: Cogsuckers and Clankers - Radio's New Golden Age or Apocalypse?

This Week in Google (MP3)

Play Episode Listen Later Oct 2, 2025 157:44 Transcription Available


What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same.

• Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow
• It's time to prepare for AI personhood
• Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being"
• Link to the podslop podcasts
• California Governor Signs Sweeping A.I. Law
• Sen. Mark Kelly's big plan for an AI future isn't ambitious enough
• DeepMind defines levels of AGI
• Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies
• AI passed the hardest CFA test in minutes
• ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents
• Amazon event live blog: we're here for new Echos, Kindles, and more
• Introducing ChatGPT Pulse | OpenAI
• Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant
• That Secret Service SIM farm story is bogus
• Judge Gives Preliminary Approval to Anthropic Settlement
• It's official: Google says the Android and ChromeOS merger is coming 'next year'
• Blippo+
• Guardian sliders
• Ive's $4,800 lantern

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Raiza Martin

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org


What Now? with Trevor Noah
Will AI Save Humanity or End It? with Mustafa Suleyman

What Now? with Trevor Noah

Play Episode Listen Later Sep 18, 2025 105:05


Trevor (who is also Microsoft's "Chief Questions Officer") and Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google's DeepMind, do a deep dive into whether the benefits of AI to the human race outweigh its unprecedented risks. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.