POPULARITY
Categories
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Special Report: The Quantum Threshold (December 02, 2025)This special episode dissects the paradigm shift from theoretical quantum physics to engineering reality, triggered by Google's latest hardware and AI advancements.Key Topics & Takeaways:⚡ The Engineering Miracle (Google Willow): Google has unveiled "Willow," a 105-qubit processor that successfully demonstrates the "Threshold Theorem." For the first time, increasing the number of physical qubits (from code distance 3 to 7) has led to an exponential drop in error rates, proving that fault-tolerant quantum computing is physically possible.
欢迎收听雪球出品的财经有深度,雪球,国内领先的集投资交流交易一体的综合财富管理平台,聪明的投资者都在这里。今天分享的内容叫谷歌vs英伟达:A I的下半场巅峰对决,来自闷得而蜜。历史回顾互联网时代,成就了谷歌、facebook,思科跌下神坛。云计算时代,成就了微软、亚马逊,Intel跌下神坛。移动互联网时代,成就了苹果,高通跌下神坛。IT行业有条铁律:每一美元的硬件,必须产生十美元的软件和服务收入。这个产业规律,暗示着,信息产业进入稳定成长期,软件和服务商的估值远远超越硬件厂商。如今,人工智能浪潮席卷全球,算力成为新时代的“石油”,大模型成为基础设施。站在 A I 时代的十字路口,我们再次目睹巨头格局的剧烈重构。英伟达,凭借 G P U 在并行计算上的天然优势,一跃成为A I训练的“卖铲人”,市值一度超越苹果、微软,登顶全球第一。它的成功,看似打破了“一美元硬件难敌十美元软件”的铁律。而谷歌,作为A I领域的长期布道者——从2014年收购 DeepMind,到2017年提出 Transformer 架构,再到持续投入大模型与 A I 原生产品——却在商业化落地和市场估值上显得步履谨慎,甚至被质疑“起了个大早,赶了个晚集”。这不禁让人发问:在A I的下半场,究竟是掌握底层算力的“硬件霸主”继续高歌猛进,还是拥有数据、算法与生态闭环的“软件巨头”后来居上?谷歌与英伟达的对决,或许正是回答这个问题的关键线索。英伟达:算力垄断的黄金时代英伟达的崛起并非偶然。自 2012 年 AlexNet 使用 G P U 加速深度学习以来,黄仁勋就押注 A I 是计算范式的根本性变革。过去三年,这一判断被验证到极致:市占率超 95%:在训练端 G P U 市场,英伟达几乎形成事实垄断;毛利率高达 75%+:远超传统芯片公司,逼近软件公司水平;CUDA 生态护城河:百万开发者、数千优化库、数万企业依赖其软件栈,迁移成本极高;订单排到 2026 年:Blackwell 芯片供不应求,客户包括微软、Meta、亚马逊、甲骨文等所有云巨头。更关键的是,英伟达不再只是卖芯片,而是通过 A I Enterprise 软件套件、NIM 微服务、DGX Cloud 等,向“A I 操作系统”演进。它正在把硬件优势转化为平台级控制力。谷歌:被低估的 A I 全栈能力如果说英伟达是 A I 时代的“水电煤供应商”,那么谷歌就是那个最早设计电网、制定标准、还自己发电用电的人。技术源头:Transformer 架构(2017)已成为所有大模型的基础;LaMDA、PaLM、Gemini 系列模型持续领先;自研芯片:TPU 已迭代至 v5e/v5p/v6/v7,在内部训练效率上媲美甚至优于 B200;数据闭环:Search、YouTube、GmA Il、Android、Workspace 每天产生海量真实用户交互数据,这是任何外部公司无法复制的燃料;产品整合:A I 已深度嵌入 Search、Workspace、Android、Cloud。更重要的是,谷歌的商业模式天然适配 A I 变现:广告仍是现金牛:2024 年 Q3 广告收入 65 亿美元每天,为 A I 投入提供无限弹药;A I 不是成本,而是效率工具:用 Gemini 优化搜索结果、自动生成广告文案、提升客服效率——每一项都能直接节省数十亿美元运营成本;云业务拐点已现:Google Cloud 首次实现全年盈利,A I 服务成增长引擎。市场低估谷歌,是因为它没有像英伟达那样“性感”的单季度 200% 增长。但谷歌的 A I 战略是内生、稳健、可规模化的——它不需要靠卖芯片吃饭,而是让 A I 成为整个生态的“操作系统”。下半场:从“卖铲子”到“开金矿”A I 上半场的主题是基础设施军备竞赛——谁有更多 G P U,谁就能训练更大模型。英伟达因此受益最大。但下半场的主题正在转向:谁能用 A I 创造真实价值?谁能把模型变成产品、服务和利润?这里有几个关键转折信号:模型同质化加剧:闭源与开源模型性能差距缩小,单纯堆参数不再有效;推理成本成为瓶颈:训练只需一次,推理每天亿次——能效比、边缘部署、定制芯片更重要;用户要结果,不要技术:企业关心“A I 能否提升客服转化率”,而非“用了多少 B200”。在这个新阶段,谷歌的优势开始凸显:它拥有从芯片,框架,模型 ,到应用,最终到达用户的完整闭环;它不需要说服客户“为什么需要 A I”,因为 A I 已经在每天服务数十亿人;它的护城河不是 CUDA,而是用户习惯 + 数据飞轮 + 产品集成度。反观英伟达,若不能从“硬件供应商”进化为“A I 平台运营商”,其高估值将面临巨大压力。毕竟,历史上从未有一家纯硬件公司能长期维持 30 倍以上的市销率。结论:不是零和博弈,而是范式迁移谷歌与英伟达并非简单的“你死我活”。事实上,它们代表了 A I 价值链的两个关键环节:基础设施层 vs 应用层。但在 A I 的下半场,界限正在模糊:英伟达在做软件;谷歌在做芯片;微软既买英伟达芯片,又自研 MA Ia,还集成 Open AI;亚马逊一边采购 H100,一边推广 TrA Inium。真正的胜负手,不在于谁卖更多芯片,而在于谁能构建“软硬一体、端云协同、数据驱动”的飞轮。从这个角度看,谷歌的长期确定性可能更高——因为它早已把 A I 编织进自己的基因。而英伟达的辉煌,仍取决于 A I 资本开支的持续性和生态壁垒的不可逾越性。投资者不妨这样思考:如果你相信 A I 仍将经历一轮疯狂的基础设施投资潮,英伟达仍是首选;如果你相信 A I 正在进入“价值兑现期”,那么谷歌这样拥有场景、数据和变现能力的公司,才真正站在复利的起点。历史告诉我们:最终赢得时代的,从来不是最锋利的铲子,而是挖到金矿并建起城市的人。
Send us a textJoin hosts Alex Sarlin and Ben Kornell as they break down OpenAI's unexpected launch of ChatGPT for K–12, Google's accelerating AI momentum, and what these shifts mean for schools, teachers, and the edtech ecosystem.✨ Episode Highlights: [00:02:03] OpenAI unveils ChatGPT for K–12 educators—secure, curriculum-aware, and free through 2027 [00:03:02] The emerging AI Classroom Wars between OpenAI and Google across major U.S. districts [00:07:36] Google's big week: DeepMind tutoring gains and Gemini 3's multimodal upgrades [00:10:25] How district leaders will navigate growing community divides over AI adoption [00:14:04] What OpenAI's move means for MagicSchool, SchoolAI, Brisk, and other edtech playersPlus, special guests:[00:19:26] Janos Perczel, CEO of Polygence on scaling project-based learning with AI and why TeachLM trains models on authentic student–teacher interactions[00:41:36] Dr. Stephen Hodges, CEO of Efekta Education on AI-powered language learning for 4M students and early evidence of major test score gains
What happens when malware stops behaving like malware and starts behaving more like a living digital organism. In this episode of The Healthier Tech Podcast, we break down Google's latest discovery: malicious software that can rewrite its own code using artificial intelligence while it is already running on your device. This one shift turns a predictable threat into something far more flexible and far harder to detect. We walk through how traditional malware works and why this new generation breaks every rule cybersecurity has relied on for decades. You will learn what makes self-modifying code so disruptive and why Google calls this a new phase of artificial intelligence abuse. You will hear about Promptflux, the first known malware that asks an artificial intelligence model to rewrite it in real time. We also explore four other experimental malware families highlighted in Google's report, including versions designed to steal files, open backdoors, gather system data, and search for passwords. Each one shows how hackers are beginning to use artificial intelligence to scale their attacks. This episode explains, in simple language, how these threats operate and why they matter for everyday users who want healthier, safer relationships with their devices. We cover how Google and DeepMind are trying to counter this trend and what this new category of evolving malware means for digital wellness, privacy, and personal tech hygiene. If you care about digital safety, tech balance, or keeping your devices healthy, this is a must-listen. This episode connects the dots between cybersecurity and wellness in a way that is clear, practical, and relevant for anyone who uses technology daily. For more episodes on digital wellness, healthy tech habits, and staying informed in a fast moving tech world, make sure to subscribe and tune in. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.
Сегодня вас ждёт множество свежих моделей: GPT-5.1 (Instant, Thinking, Codex-Max), Grok 4.1, Gemini 3 Pro, Kimi-K2 Thinking, ERNIE 5.0, Qwen DeepResearch 2511 и VibeThinker. Также поговорим про групповые чаты в ChatGPT, IDE от Google — Antigravity, IDE со встроенным TikTok — Chad, новую Visual Studio 2026, Google Code Wiki, NanaBanan Pro и космический проект Suncatcher с TPU на орбите, SIMA 2 от DeepMind, Microsoft Agent 365, летающие такси XPENG и их гуманоидных разнополых роботов IRON. В финале — этические эксперименты Anthropic с «правами» моделей и размышления о цифровых клонах и телесности сознания.
Youtube Channel: https://www.youtube.com/@GenerativeAIMeetup Mark's Travel Vlog: https://www.youtube.com/@kumajourney11 Mark's Personal Youtube Channel: https://www.youtube.com/@markkuczmarski896 Attend a live event: https://genaimeetup.com/ Shashank Linked In: https://www.linkedin.com/in/shashu10/ In this episode of the Generative AI Meetup Podcast, Mark (in Ohio) and Shashank (in India) finally sit down after a month of travel to unpack a very eventful stretch in AI. They dive into Google's new Gemini 3 Pro, its standout scores on Humanity's Last Exam and ARC-AGI, and why these reasoning benchmarks matter more than yet another near-perfect standardized test score. Mark also makes a public feature request to DeepMind: please increase Gemini's max output tokens. From there they get hands-on with the developer experience: Google's new Anti-Gravity coding IDE (and how it compares to Cursor) Using GPT-5.1 Codex High in Cursor's autonomous “plan mode” Why long context and long output windows are critical for deep research and book-length projects The conversation then shifts to the bigger picture: LLMs as therapists, sycophancy, safety, and the danger of AI always agreeing with you Mark's rant on robotics, humanoid robots, and a coming age of extreme abundance where robots handle most physical and intellectual work Why learning to code may become the mental equivalent of going to the gym—a “brain gym” in a world where AI can do most practical tasks They also cover the latest AI industry drama and milestones: Yann LeCun leaving Meta, what that might signal about Big Tech AI labs, and how godfathers like Hinton, LeCun, and Bengio see the road to AGI DeepMind's new game-playing agent and why world models in 3D environments matter for real-world robotics Genspark hitting unicorn status and what it means for “ChatGPT wrapper” startups Co-inventing a new term on air: a “narwhal” = a trillion-dollar private company If you're curious about where frontier models, coding agents, robotics, and AGI trajectories all intersect—plus some philosophical musing on jobs, meaning, and abundance—this episode is for you.
Hey everyone, Alex here
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?
Jason Howell and Jeff Jarvis explore Google's Gemini 3 multimodal AI with visual and interactive features, Microsoft's AI Copilot launch across Windows, and Jeff Bezos's new well-funded AI startup Project Prometheus focused on engineering and manufacturing. Note: Time codes subject to change depending on dynamic ad insertion by the distributor. CHAPTERS: 00:00 - Podcast begins 01:55 - Do LLM's understand? AI Pioneer Yann Le Cun spars with DeepMind's Adam Brown 20:45 - A new era of intelligence with Gemini 3 34:25 - Microsoft is packing more AI into Windows, ready or not - here's what's new 37:59 - Inside Microsoft Agent 365: How AI Workers Will Be Secured, Identified, and Governed 42:24 - At a major AI conference, one startup got voted most likely to flop 44:48 - Hugging Face CEO says we're in an ‘LLM bubble,' not an AI bubble 46:07 - Google boss warns 'no company is going to be immune' if AI bubble bursts 49:02 - Google unveils agentic tools to help advertisers - So does Amazon 49:23 - And Meta introduces a foundation model for advertisers 53:17 - DeepMind releases WeatherNext2 56:05 - OpenAI introduces group chats with ChatGPT 58:28 - Microsoft's new Anthropic partnership brings Claude AI models to Azure 59:26 - Amazon and Microsoft Back Effort That Would Restrict Nvidia's Exports to China 1:00:42 - Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive Learn more about your ad choices. Visit megaphone.fm/adchoices
Proteins are crucial for life. They're made of amino acids that “fold” into millions of different shapes. And depending on their structure, they do radically different things in our cells. For a long time, predicting those shapes for research was considered a grand biological challenge.But in 2020, Google's AI lab DeepMind released Alphafold, a tool that was able to accurately predict many of the structures necessary for understanding biological mechanisms in a matter of minutes. In 2024, the Alphafold team was awarded a Nobel Prize in chemistry for the advance.Five years later after its release, Host Ira Flatow checks in on the state of that tech and how it's being used in health research with John Jumper, one of the lead scientists responsible for developing Alphafold.Guest: John Jumper, scientist at Google Deepmind and co-recipient of the 2024 Nobel Prize in chemistry.Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
-If you're experiencing internet issues this morning, you're far from alone. Infrastructure company Cloudflare has been hit with what it calls "widespread 500 errors, with Dashboard and API also failing." The company said that services are starting to recover, but customers may continue to see "higher-than-normal errors rates" as it continues to work on the problem. As of 8:13 am, the company said that "the issue has been identified and a fix is being implemented." The company added that "we have made changes that have allowed Cloudflare Access and WARP to recover. Error levels for Access and WARP users have returned to pre-incident rates." -Tesla has secured a ruling to strip a 2017 lawsuit claiming a racist work environment of its class-action status, as reported by Reuters. The lawsuit could not proceed with class-action status because the plaintiffs' attorneys had failed to find 200 class members willing to testify. -Google's DeepMind just released WeatherNext 2, a new version of its AI weather prediction model. The company promises that it "delivers more efficient, more accurate and higher-resolution global weather predictions." Learn more about your ad choices. Visit podcastchoices.com/adchoices
WeatherNext 2 can generate information around eight times faster than the previous version. Learn more about your ad choices. Visit podcastchoices.com/adchoices
News Connect(ニュースコネクト)あなたと経済をつなぐ5分間1日1つ、5分間で、国際政治や海外のビジネスシーンを中心に、世界のメガトレンドがわかる重要ニュースを解説。朝の支度や散歩、通勤、家事の時間などにお聴きいただけるとうれしいです。▼出演:野村高文(Podcastプロデューサー/Podcast Studio Chronicle代表)https://x.com/nmrtkfm▼支援プログラム「Chronicleサポーター」については、こちらをご参照ください。https://chronicle-inc.net/support▼新刊書籍『プロ目線のPodcastのつくり方』https://amzn.asia/d/0n3gLJN▼参考ニュース:WeatherNext 2: Our most advanced weather forecasting modelhttps://blog.google/technology/google-deepmind/weathernext-2/DeepMind's Latest AI Weather Model Targets Energy Tradershttps://www.bloomberg.com/news/articles/2025-11-17/deepmind-s-latest-ai-weather-model-targets-energy-tradersGoogle updates its weather forecasts with a new AI modelhttps://www.theverge.com/news/822489/weather-forecast-ai-model-google-weathernextBayesian-optimized machine learning boosts actual evapotranspiration prediction in water-stressed agricultural regions of Chinahttps://www.nature.com/articles/s41598-025-22130-y▼Podcast Studio Chronicle公式サイトhttps://chronicle-inc.net/
Our thoughts on the Tim Cook succession and the possible new iPhone release schedule. Plus, DeepMind gets better at weather and the Tilly Norwood people are back at it.Starring Tom Merritt and Robb Dunewood.Show notes can be found here. Hosted on Acast. See acast.com/privacy for more information.
Are current AI models smart enough to rule the world — or just house cats with fancy vocabulary?This week, a tectonic shift is happening in AI: Meta's chief scientist Jan LeCun quits to chase world models, Fei-Fei Li launches Marble, a spatial intelligence engine, and DeepMind drops CMA-2, a self-taught gamer bot that might be the blueprint for AGI.Meanwhile, OpenAI releases GPT-5.1 — and China's Kimi K2 and Ernie 5.0 roll out shockingly powerful, ultra-low-cost models. The AI race isn't just about intelligence anymore — it's about who can afford to scale.If you lead a business, this episode explains why spatial intelligence, not language, may soon be your competitive edge. The next wave of AI isn't just about better answers, it's about deeper understanding, real-world interaction, and models that scale affordably. If you're not watching spatial intelligence, you're already behind.About Leveraging AI The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/ YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/ Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/ Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily News Rundown November 15 2025:Tune in at https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-anthropic-disrupts-ai-orchestrated/id1684415169?i=1000736811381Welcome to AI Unraveled, Your daily briefing on the real world business impact of AI
Vishal Gupta, engineering manager, machine learning at Reddit, joins the podcast to explain how the social media community platform uses artificial intelligence to improve user experience and ad relevance. Much of the advertising work relies on increasingly sophisticated recommender systems that have evolved from simple collaborative filtering to deep learning and large language model–based systems capable of multimodal understanding. https://mitsmr.com/4onhUMgVishal and Sam also explore the philosophical and ethical aspects of AI-driven platforms. Vishal emphasizes the importance of balance — between exploration and exploitation in recommendations, between advertiser goals and user experience, and between human- and machine-generated content. He argues that despite the rise of AI-generated material, authentic human conversation remains vital and even more valuable as models depend on it for training. Read the episode transcript here. Guest bio: Vishal Gupta is a seasoned engineering leader who leads multiple artificial intelligence and machine learning teams at Reddit in the ads domain. He has a decade of experience working on cutting-edge machine learning techniques at companies like DeepMind, Google, and Twitter. Gupta is passionate about applied AI research that significantly contributes to a company's top and bottom lines. Me, Myself, and AI is a podcast produced by MIT Sloan Management Review and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder. We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials.
Jed Borovik, Product Lead at Google Labs, joins Latent Space to unpack how Google is building the future of AI-powered software development with Jules. From his journey discovering GenAI through Stable Diffusion to leading one of the most ambitious coding agent projects in tech, Borovik shares behind-the-scenes insights into how Google Labs operates at the intersection of DeepMind's model development and product innovation. We explore Jules' approach to autonomous coding agents and why they run on their own infrastructure, how Google simplified their agent scaffolding as models improved, and why embeddings-based RAG is giving way to attention-based search. Borovik reveals how developers are using Jules for hours or even days at a time, the challenges of managing context windows that push 2 million tokens, and why coding agents represent both the most important AI application and the clearest path to AGI. This conversation reveals Google's positioning in the coding agent race, the evolution from internal tools to public products, and what founders, developers, and AI engineers should understand about building for a future where AI becomes the new brush for software engineering. Chapters 00:00:00 Introduction and GitHub Universe Recap 00:00:57 New York Tech Scene and East Coast Hackathons 00:02:19 From Google Search to AI Coding: Jed's Journey 00:04:19 Google Labs Mission and DeepMind Collaboration 00:06:41 Jules: Autonomous Coding Agents Explained 00:09:39 The Evolution of Agent Scaffolding and Model Quality 00:11:30 RAG vs Attention: The Shift in Code Understanding 00:13:49 Jules' Journey from Preview to Production 00:15:05 AI Engineer Summit: Community Building and Networking 00:25:06 Context Management in Long-Running Agents 00:29:02 The Future of Software Engineering with AI 00:36:26 Beyond Vibe Coding: Spec Development and Verification 00:40:20 Multimodal Input and Computer Use for Coding Agents
ConnectWise has announced enhancements to its Ozzio platform, which now includes expanded third-party patching for over 7,000 applications, improvements to the professional services automation (PSA) user experience, and advanced robotic process automation (RPA) capabilities. These updates aim to address security vulnerabilities in widely exploited applications and streamline operations for managed service providers (MSPs). The new features are set to improve operational efficiency and security, with the expanded patching available immediately and RPA features expected to roll out in the coming months.In conjunction with these updates, ESET has integrated its ESET Protect platform with ConnectWise Ozzio, allowing for one-click deployment of security management tools. This integration is designed to enhance the efficiency of security tasks for MSPs, enabling them to meet legal and insurance requirements more effectively. Additionally, ConnectSecure has introduced AI-powered vulnerability management reports that prioritize risks based on business impact rather than just technical severity, further supporting MSPs in delivering proactive risk assessments.OpenAI has surpassed 1 million business customers, marking it as the fastest-growing business platform in history. A Wharton study indicates that 75% of enterprises using AI technologies report a positive return on investment. Meanwhile, Google has launched Gemini AI tools for stock traders and improved hurricane prediction capabilities through its DeepMind technology, showcasing the growing integration of AI across various sectors, including finance and weather forecasting.For MSPs and IT service leaders, these developments underscore the importance of integrating advanced security and AI capabilities into their service offerings. As the landscape shifts towards cyber resilience and AI-driven solutions, providers must adapt by leveraging these tools to enhance their operational efficiency and client services. The focus on measurable outcomes, such as trust and risk management, will be crucial for maintaining competitive advantage in an increasingly automated environment. Four things to know today00:00 At IT Nation Connect, ConnectWise Focuses on Asio Enhancements While Ecosystem Partners Deliver the Bigger Innovation05:37 N-able Rebrands Its Future: Strong Earnings and AI-Fueled Pivot Toward Cyber Resilience08:31 From ChatGPT to Hurricanes: How AI's Expansion Is Turning Tools Into Core Business Systems11:14 Trust, Transparency, and Transformation: How AI Acceleration Is Forcing Leaders to Rethink Human Metrics This is the Business of Tech. Supported by: https://mailprotector.com/mspradio/
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI. Why "Everyone Dies" Gets AGI All Wrong The Nonprofit Feeding the Entire Internet to AI Companies Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough' How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Perplexity's new AI tool aims to simplify patent research Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do Amazon and Perplexity have kicked off the great AI web browser fight Neural network finds an enzyme that can break down polyurethane Dictionary.com names 6-7 as 2025's word of the year Tech companies don't care that students use their AI agents to cheat The Morning After: Musk talks flying Teslas on Joe Rogan's show The Hatred of Podcasting | Brace Belden TikTok announces its first awards show in the US Google wants to build solar-powered data centers — in space Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028 American Museum of Tort Law Dog Chapel - Dog Mountain Nicvember masterlist Pornhub says UK visitors down 77% since age checks came in Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jeremy Berman Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
Send us a textIn this episode of Sidecar Sync, Mallory Mejias is joined by marine biologist and behavioral researcher Dr. Denise Herzing for a one-of-a-kind conversation about dolphins, data, and deep learning. Dr. Herzing shares insights from her 40-year study of Atlantic spotted dolphins and how that lifetime of underwater research is now powering DolphinGemma—an open-source large language model trained on dolphin vocalizations. The two discuss what it means to label meaning in animal communication, how AI is finally catching up to the natural world, and why collaboration across disciplines is essential to understanding both language and intelligence—human or otherwise.Dr. Denise Herzing is the Founder and Research Director of the Wild Dolphin Project, leading nearly four decades of groundbreaking research on Atlantic spotted dolphins in the Bahamas. She holds degrees in Marine Zoology and Behavioral Biology (B.S., M.A., Ph.D.) and serves as an Affiliate Assistant Professor at Florida Atlantic University. A Guggenheim and Explorers Club Fellow, Dr. Herzing has advised the Lifeboat Foundation and American Cetacean Society and sits on the board of Schoolyard Films. Her work has been featured in National Geographic, BBC, PBS, Discovery, and her TED2013 talk. She is the author of Dolphin Diaries and co-editor of Dolphin Communication and Cognition.
大家最常用的AI工具是什麼? 是ChatGPT、Grok、Gemini?或是會交換使用? - 《AI霸主》帶我們看懂AI產業的競爭,從ChatGPT之父山姆・奧特曼(Sam Altman)和開發AlphaGo擊敗世界棋王的德米斯.哈薩比斯(Demis Hassabis)開始講起,我們會發現AI對人類的改變不只是「馬車轉到油車」的程度,而是「蠟燭轉到電燈」的劇烈改變。 - 本集為誠品書店R79地下閱讀職人選特別企劃,連動本期主題《我們與科技的距離》,邀請《寶博朋友說》的寶博士與我們聊《AI霸主》這本書,我們將聊到他對AI的觀察、山姆・奧特曼與德米斯.哈薩比斯兩人對AI的影響,以及為什麼AI產業的競爭,關鍵是速度? . 來賓|寶博士(科技立委) 主持|林子榆(誠品職人) . ▍ 邊聽邊讀 AI霸主 https://esliteme.pse.is/8az7u5 . ⭓ 誠品聯名卡︱天天賺回饋 活動詳情
Artificial intelligence has changed how we think about service, but few companies have bridged the gap between automation and genuine intelligence. In this episode of Tech Talks Daily, I'm joined by Puneet Mehta, CEO of Netomi, to discuss how customer experience is evolving in an age where AI doesn't just respond but plans, acts, and optimizes in real time. Puneet has been building in AI long before the current hype cycle. Backed by early investors such as Greg Brockman of OpenAI and the founders of DeepMind, Netomi has become one of the leading platforms driving AI-powered customer experience for global enterprises. Their technology quietly powers interactions at airlines, insurers, and retailers that most of us use every day. What makes Netomi stand out is not its scale but the philosophy behind it. Rather than designing AI to replace humans, Netomi built an agent-centric model where AI and people work together. Puneet explains how their Autopilot and Co-Pilot modes allow human agents to stay in control while AI accelerates everything from response time to insight generation. It is an approach that sees humans teaching AI, AI assisting humans, and both learning from each other to create what he calls an agentic factory. We explore how Netomi's platform can deploy at Fortune 50 scale in record time without forcing companies to overhaul existing systems. Puneet reveals how pre-built integrations, AI recipes, and a no-code studio allow business teams to roll out solutions in weeks rather than months. The focus is on rapid time-to-value, trust, and safety through what he calls sanctioned AI, a framework that ensures governance, transparency, and compliance in every customer interaction. As our conversation unfolds, Puneet describes how this evolution is transforming the contact center from a cost center into a loyalty engine. By using AI to anticipate needs and resolve issues before customers reach out, companies are creating experiences that feel more personal, more proactive, and more human. This is a glimpse into the future of enterprise AI, where trust, speed, and empathy define the next generation of customer experience. Listen now to hear how Netomi is reimagining the role of AI in service and setting new standards for how businesses build relationships at scale.
La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Cara A: -Recordatorio Premios iVoox (5:00) -Apuesta 3I/ATLAS (8:00) -La forma de las estalagmitas (00:17) Este episodio continúa en la Cara B. Contertulios: Cecilia Garraffo, Juan Carlos Gil, Francis Villatoro. Imagen de portada realizada con Seedream 4 4k. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace... y a veces ni eso
La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Cara B: -La forma de las estalagmitas (Continuación) (00:00) -Aprendizaje multiespectral de Google Deepmind (09:00) -Entrelazamiento cuántico en la gravedad vs gravitación cuántica (39:00) -Absorción de gravitones por fotones en LIGO (1:11:00) -Halloween en el planetario (1:17:00) -Señales de los oyentes (1:34:00) Este episodio es continuación de la Cara A. Contertulios: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar,. Imagen de portada realizada con Seedream 4 4k. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace... y a veces ni eso
Google Q3 2025 Post-Mortem: AI Execution Over AI HypePost MortemIn this episode of Around the Desk, Sean Emory, Founder & CIO of Avory & Co., breaks down why investors are rewarding Google's spending while punishing others, and how its strategy from TPUs to Gemini shows real ROI in the new compute era.We cover:• Revenue acceleration across Search, YouTube, and Cloud (+15% to +34%)• Gemini's rapid growth to 650M users, 300M paid• Why CAPEX to $93B is seen as productive, not reckless• Anthropic's commitment to TPUs and the growing Cloud backlog (+46%)• How AI integration is lifting engagement and monetization• Why Google's AI flywheel looks more efficient than peersDisclaimer Avory is an investor in AlphabetAvory & Co. is a Registered Investment Adviser. This platform is solely for informational purposes. Advisory services are only offered to clients or prospective clients where Avory & Co. and its representatives are properly licensed or exempt from licensure. Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital. No advice may be rendered by Avory & Co. unless a client service agreement is in place.Listeners and viewers are encouraged to seek advice from a qualified tax, legal, or investment adviser to determine whether any information presented may be suitable for their specific situation. Past performance is not indicative of future performance.“Likes” are not intended to be endorsements of our firm, our advisors or our services. Please be aware that while we monitor comments and “likes” left on this page, we do not endorse or necessarily share the same opinions expressed by site users. While we appreciate your comments and feedback, please be aware that any form of testimony from current or past clients about their experience with our firm is strictly forbidden under current securities laws. Please honor our request to limit your posts to industry-related educational information and comments. Third-party rankings and recognitions are no guarantee of future investment success and do not ensure that a client or prospective client will experience a higher level of performance or results. These ratings should not be construed as an endorsement of the advisor by any client nor are they representative of any one client's evaluation.Please reach out to Houston Hess our head of Compliance and Operations for any further details.
Ein Google-Modell schlägt plötzlich die richtige Behandlung für eine Augenkrankheit vor. OpenAI und DeepMind holen Gold bei der Mathe-Olympiade. Und ein Professor ist schockiert, weil eine KI auf seine noch unveröffentlichte Forschungshypothese kommt. Fritz und Gregor betrachten die spannendsten Entwicklungen an der Schnittstelle von KI und Forschung.
Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind where her research focused on multimodal AI models. She works on developing evaluation methods and analyze model's learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Masters in Computer Science from the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling. Time stamps of the conversation00:00 Highlights01:20 Introduction02:08 Entry point in AI03:04 Background in Cognitive Science & Computer Science 04:55 Research at Google DeepMind05:47 Importance of language-vision in AI10:36 Impact of architecture vs. data on performance 13:06 Transformer architecture 14:30 Evaluating AI models19:02 Can LLMs understand numerical concepts 24:40 Theory-of-mind in AI27:58 Do LLMs learn theory of mind?29:25 LLMs as judge35:56 Publish vs. perish culture in AI research40:00 Working at Google DeepMind42:50 Doing a Ph.D. vs not in AI (at least in 2025)48:20 Looking back on research careerMore about Aida: http://www.aidanematzadeh.me/About the Host:Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis. Linkedin: shahjay22 Twitter: jaygshah22 Homepage: https://jaygshah.github.io/ for any queries.Stay tuned for upcoming webinars!**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.**
This episode is a re-air of one of our most popular conversations from this year, featuring insights worth revisiting. Thank you for being part of the Data Stack community. Stay up to date with the latest episodes at datastackshow.com.This week on The Data Stack Show, Eric and John welcome Misha Laskin, Co-Founder and CEO of ReflectionAI. Misha shares his journey from theoretical physics to AI, detailing his experiences at DeepMind. The discussion covers the development of AI technologies, the concepts of artificial general intelligence (AGI) and superhuman intelligence, and their implications for knowledge work. Misha emphasizes the importance of robust evaluation frameworks and the potential of AI to augment human capabilities. The conversation also touches on autonomous coding, geofencing in AI tasks, the future of human-AI collaboration, and more. Highlights from this week's conversation include:Misha's Background and Journey in AI (1:13)Childhood Interest in Physics (4:43)Future of AI and Human Interaction (7:09)AI's Transformative Nature (10:12)Superhuman Intelligence in AI (12:44)Clarifying AGI and Superhuman Intelligence (15:48)Understanding AGI (18:12)Counterintuitive Intelligence (22:06)Reflection's Mission (25:00)Focus on Autonomous Coding (29:18)Future of Automation (34:00)Geofencing in Coding (38:01)Challenges of Autonomous Coding (40:46)Evaluations in AI Projects (43:27)Example of Evaluation Metrics (46:52)Starting with AI Tools and Final Takeaways (50:35)The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Now on Spotify Video! Mustafa Suleyman's journey to becoming one of the most influential figures in artificial intelligence began far from a Silicon Valley boardroom. The son of a Syrian immigrant father in London, his early years in human rights activism shaped his belief that technology should be used for good. That vision led him to co-found DeepMind, acquired by Google, and later launch Inflection AI. Now, as CEO of Microsoft AI, he explores the next era of AI in action. In this episode, Mustafa discusses the impact of AI in business, how it will transform the future of work, and even our relationships. In this episode, Hala and Mustafa will discuss: (00:00) Introduction(02:42) The Coming Wave: How AI Will Disrupt Everything(06:45) Artificial Intelligence as a Double-Edged Sword (11:33) From Human Rights to Ethical AI Leadership(15:35) What Is AGI, Narrow AI, and Hallucinations of AI?(24:15) Emotional AI and the Rise of Digital Companions(33:03) Microsoft's Vision for Human-Centered AI(41:47) Can We Contain AI Before Its Revolution?(48:33) The Future of Work in an AI-Powered World(52:22) AI in Business: Advice for Entrepreneurs Mustafa Suleyman is the CEO of Microsoft AI and a leading figure in artificial intelligence. He co-founded DeepMind, one of the world's foremost AI research labs, acquired by Google, and went on to co-found Inflection AI, a machine learning and generative AI company. He is also the bestselling author of The Coming Wave. Recognized globally for his influence, Mustafa was named one of Time's 100 most influential people in AI in both 2023 and 2024. Sponsored By: Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING Shopify - Start your $1/month trial at Shopify.com/profiting. Mercury streamlines your banking and finances in one place. Learn more at mercury.com/profiting. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC. Quo - Get 20% off your first 6 months at Quo.com/PROFITING Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING Framer- Go to Framer.com and use code PROFITING to launch your site for free. Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order. Pipedrive - Get a 30-day free trial at pipedrive.com/profiting Airbnb - Find yourself a cohost at airbnb.com/host Resources Mentioned: Mustafa's Book, The Coming Wave: bit.ly/TheComing_Wave Mustafa's LinkedIn: linkedin.com/in/mustafa-suleyman Active Deals - youngandprofiting.com/deals Key YAP Links Reviews - ratethispodcast.com/yap YouTube - youtube.com/c/YoungandProfiting Newsletter - youngandprofiting.co/newsletter LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI for Entrepreneurs, AI Podcast
When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we're probably penetrated by the CCP already, and if they really wanted something, they could take it.”This isn't paranoid speculation. It's the working assumption of people whose job is to protect frontier AI models worth billions of dollars. And they're not even trying that hard to stop it — because the security measures that might actually work would slow them down in the race against competitors.Full transcript, highlights, and links to learn more: https://80k.info/dkDaniel is the founder of the AI Futures Project and author of AI 2027, a detailed scenario showing how we might get from today's AI systems to superintelligence by the end of the decade. Over a million people read it in the first few weeks, including US Vice President JD Vance. When Daniel talks to researchers at Anthropic, OpenAI, and DeepMind, they tell him the scenario feels less wild to them than to the general public — because many of them expect something like this to happen.Daniel's median timeline? 2029. But he's genuinely uncertain, putting 10–20% probability on AI progress hitting a long plateau.When he first published AI 2027, his median forecast for when superintelligence would arrive was 2027, rather than 2029. So what shifted his timelines recently? Partly a fascinating study from METR showing that AI coding assistants might actually be making experienced programmers slower — even though the programmers themselves think they're being sped up. The study suggests a systematic bias toward overestimating AI effectiveness — which, ironically, is good news for timelines, because it means we have more breathing room than the hype suggests.But Daniel is also closely tracking another METR result: AI systems can now reliably complete coding tasks that take humans about an hour. That capability has been doubling every six months in a remarkably straight line. Extrapolate a couple more years and you get systems completing month-long tasks. At that point, Daniel thinks we're probably looking at genuine AI research automation — which could cause the whole process to accelerate dramatically.At some point, superintelligent AI will be limited by its inability to directly affect the physical world. That's when Daniel thinks superintelligent systems will pour resources into robotics, creating a robot economy in months.Daniel paints a vivid picture: imagine transforming all car factories (which have similar components to robots) into robot production factories — much like historical wartime efforts to redirect production of domestic goods to military goods. Then imagine the frontier robots of today hooked up to a data centre running superintelligences controlling the robots' movements to weld, screw, and build. Or an intermediate step might even be unskilled human workers coached through construction tasks by superintelligences via their phones.There's no reason that an effort like this isn't possible in principle. And there would be enormous pressure to go this direction: whoever builds a superintelligence-powered robot economy first will get unheard-of economic and military advantages.From there, Daniel expects the default trajectory to lead to AI takeover and human extinction — not because superintelligent AI will hate humans, but because it can better pursue its goals without us.But Daniel has a better future in mind — one he puts roughly 25–30% odds that humanity will achieve. This future involves international coordination and hardware verification systems to enforce AI development agreements, plus democratic processes for deciding what values superintelligent AIs should have — because in a world with just a handful of superintelligent AI systems, those few minds will effectively control everything: the robot armies, the information people see, the shape of civilisation itself.Right now, nobody knows how to specify what values those minds will have. We haven't solved alignment. And we might only have a few more years to figure it out.Daniel and host Luisa Rodriguez dive deep into these stakes in today's interview.What did you think of the episode? https://forms.gle/HRBhjDZ9gfM8woG5AThis episode was recorded on September 9, 2025.Chapters:Cold open (00:00:00)Who's Daniel Kokotajlo? (00:00:37)Video: We're Not Ready for Superintelligence (00:01:31)Interview begins: Could China really steal frontier model weights? (00:36:26)Why we might get a robot economy incredibly fast (00:42:34)AI 2027's alternate ending: The slowdown (01:01:29)How to get to even better outcomes (01:07:18)Updates Daniel's made since publishing AI 2027 (01:15:13)How plausible are longer timelines? (01:20:22)What empirical evidence is Daniel looking out for to decide which way things are going? (01:40:27)What post-AGI looks like (01:49:41)Whistleblower protections and Daniel's unsigned NDA (02:04:28)Audio engineering: Milo McGuire, Simon Monsour, and Dominic ArmstrongMusic: CORBITCoordination, transcriptions, and web: Katy Moore
* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions. * Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design. * Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing. * Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims. * Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively ‘objective' results. * Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system. * Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm. * Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes. * Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal. * Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances—because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective. * Number 1: AI Lie….OpenAI's Scheming Models from 2025. OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. It faked compliance to hide its true behavior. That's AI deliberately learning to scheme.
* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions. * Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design. * Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing. * Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims. * Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively ‘objective' results. * Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system. * Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm. * Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes. * Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal. * Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances—because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective. * Number 1: AI Lie….OpenAI's Scheming Models from 2025. OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. It faked compliance to hide its true behavior. That's AI deliberately learning to scheme.
This episode is a little different from our usual fare: It's a conversation with our head of AI training Alex Duffy about Good Start Labs, a company he incubated inside Every. Today, Good Start Labs is spinning out of Every as a separate company with $3.6 million in funding from General Catalyst, Inovia, Every, and a group of angel investors from top-tier AI labs like DeepMind. We get into how Alex learned some of his biggest lessons about the real world from games, starting with RuneScape, which taught him how markets work and how not to get scammed. He explains why the static benchmarks we use to evaluate LLMs today are breaking down, and how games like Diplomacy offer a richer, more dynamic way to test and train large language models. Finally, Alex shares where he sees the most promise in AI—software, life sciences, and education—and why he believes games can make the models we use smarter, while helping people understand and use AI more effectively.If you found this episode interesting, please like, subscribe, comment, and share.Want even more?Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.To hear more from Dan Shipper:Subscribe to Every: https://every.to/subscribeFollow him on X: https://twitter.com/danshipperTimestamps00:00:00 - Start00:01:48 - Introduction00:04:14 - Why evals and benchmarks are broken00:07:13 - The sneakiest LLMs in the market00:13:00 - A competition that turns prompting into a sport00:15:49 - Building a business around using games to make AI better00:22:39 - Can language models learn how to be funny00:25:31 - Why games are a great way to evaluate and train new models00:26:58 - What child psychology tells us about games and AI00:30:10 - Using games to unlock continual learning in AI00:36:42 - Why Alex cares deeply about games00:44:37 - Where Alex sees the most promise in AI00:50:54 - Rethinking how young people start their careers in the age of AILinks to resources mentioned in the episode:Alex Duffy: alex duffy (@alxai_)Good Start Labs: https://goodstartlabs.com/, good start (@goodstartlabs)The book Alex is reading about the importance of games: Playing with Reality: How Games Shape Our WorldThe book Dan recommends by the psychoanalyst D.W. Winnicott: Playing and Reality
Google DeepMind's AI agent finds and fixes vulnerabilities California law lets consumers universally opt out of data sharing China-Nexus actors weaponize 'Nezha' open source tool Huge thanks to our sponsor, ThreatLocker Cybercriminals don't knock — they sneak in through the cracks other tools miss. That's why organizations are turning to ThreatLocker. As a zero-trust endpoint protection platform, ThreatLocker puts you back in control, blocking what doesn't belong and stopping attacks before they spread. Zero Trust security starts here — with ThreatLocker. Learn more at ThreatLocker.com.
Our 221st episode with a summary and discussion of last week's big AI news!Recorded on 09/19/2025Note: we transitioned to a new RSS feed and it seems this did not make it to there, so this may be posted about 2 weeks past the release date.Hosted by Andrey Kurenkov and co-hosted by Michelle LeeFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead out our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode:OpenAI releases a new version of Codex integrated with GPT-5, enhancing coding capabilities and aiming to compete with other AI coding tools like Cloud Code.Significant updates in the robotics sector include new ventures in humanoid robots from companies like Figure AI and China's Unitree, as well as expansions in robotaxi services from Tesla and Amazon's Zoox.New open-source models and research advancements were discussed, including Google's DeepMind's self-improving foundation model for robotics and a physics foundation model aimed at generalizing across various physical systems.Legal battles continue to surface in the AI landscape with Warner Bros. suing MidJourney for copyright violations and Rolling Stone suing Google over AI-generated content summaries, highlighting challenges in AI governance and ethics.Timestamps:(00:00:10) Intro / BanterTools & Apps(00:02:33) OpenAI upgrades Codex with a new version of GPT-5(00:04:02) Google Injects Gemini Into Chrome as AI Browsers Go Mainstream | WIRED(00:06:14) Anthropic's Claude can now make you a spreadsheet or slide deck. | The Verge(00:07:12) Luma AI's New Ray3 Video Generator Can 'Think' Before Creating - CNETApplications & Business(00:08:32) OpenAI secures Microsoft's blessing to transition its for-profit arm | TechCrunch(00:10:31) Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic | TechCrunch(00:12:00) Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report(00:13:52) China's Unitree plans $7 billion IPO valuation as humanoid robot race heats up(00:15:45) Tesla's robotaxi plans for Nevada move forward with testing permit | TechCrunch(00:17:48) Amazon's Zoox jumps into U.S. robotaxi race with Las Vegas launch(00:19:27) Replit hits $3B valuation on $150M annualized revenue | TechCrunch(00:21:14) Perplexity reportedly raised $200M at $20B valuation | TechCrunchProjects & Open Source(00:22:08) [2509.07604] K2-Think: A Parameter-Efficient Reasoning System(00:24:31) [2509.09614] LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software EngineeringResearch & Advancements(00:28:17) [2509.15155] Self-Improving Embodied Foundation Models(00:31:47) [2509.13805] Towards a Physics Foundation Model(00:34:26) [2509.12129] Embodied Navigation Foundation ModelPolicy & Safety(00:37:49) Anthropic endorses California's AI safety bill, SB 53 | TechCrunch(00:40:12) Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle(00:42:02) Rolling Stone Publisher Sues Google Over AI Overview SummariesSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Our 222st episode with a summary and discussion of last week's big AI news!Recorded on 10/03/2025Hosted by Andrey Kurenkov and co-hosted by Jon KrohnFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead out our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode:(00:00:10) Intro / Banter(00:03:08) News Preview(00:03:56) Response to listener commentsTools & Apps(00:04:51) ChatGPT parent company OpenAI announces Sora 2 with AI video app(00:11:35) Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents and coding supremacy | The Verge(00:22:25) Meta launches 'Vibes,' a short-form video feed of AI slop | TechCrunch(00:26:42) OpenAI launches ChatGPT Pulse to proactively write you morning briefs | TechCrunch(00:33:44) OpenAI rolls out safety routing system, parental controls on ChatGPT | TechCrunch(00:35:53) The Latest Gemini 2.5 Flash-Lite Preview is Now the Fastest Proprietary Model (External Tests) and 50% Fewer Output Tokens - MarkTechPost(00:39:54) Microsoft just added AI agents to Word, Excel, and PowerPoint - how to use them | ZDNETApplications & Business(00:42:41) OpenAI takes on Google, Amazon with new agentic shopping system | TechCrunch(00:46:01) Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product | WIRED(00:49:54) OpenAI is the world's most valuable private company after private stock sale | TechCrunch(00:53:07) Elon Musk's xAI accuses OpenAI of stealing trade secrets in new lawsuit | Technology | The Guardian(00:55:40) Former OpenAI and DeepMind researchers raise whopping $300M seed to automate science | TechCrunchProjects & Open Source(00:58:26) [2509.16941] SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks?Research & Advancements(01:01:28) [2509.17196] Evolution of Concepts in Language Model Pre-Training(01:05:36) [2509.19284] What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoTLighting round(01:09:37) [2507.02954] Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III(01:12:03) [2509.24552] Short window attention enables long-term memorizationPolicy & Safety(01:18:11) SB 53, the landmark AI transparency bill, is now law in California | The Verge(01:24:07) Elon Musk's xAI offers Grok to federal government for 42 cents | TechCrunch(01:25:23) Character.AI removes Disney characters from platform after studio issues warning(01:28:50) Spotify's Attempt to Fight AI Slop Falls on Its FaceSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Google faces the greatest innovator's dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?Today we tell the complete story of Google's 20+ year AI journey: from their first tiny language model in 2001 through the creation Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk's fury over Google's DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google. This is the story of how the world's greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?Sponsors:Many thanks to our fantastic Fall ‘25 Season partners:J.P. Morgan PaymentsSentryWorkOSShopifyAcquired's 10th Anniversary Celebration!When: October 20th, 4:00 PM PTWho: All of you!Where: https://us02web.zoom.us/j/84061500817?pwd=opmlJrbtOAen4YOTGmPlNbrOMLI8oo.1Links:Sign up for email updates and vote on future episodes!Geoff Hinton's 2007 Tech Talk at GoogleOur recent ACQ2 episode with Tobi LutkeWorldly Partners' Multi-Decade Alphabet StudyIn the PlexSupremecyGenius MakersAll episode sourcesCarve Outs:We're hosting the Super Bowl Innovation Summit!F1: The MovieTravelpro suitcasesGlue Guys PodcastSea of StarsStepchange PodcastMore Acquired:Get email updates and vote on future episodes!Join the SlackSubscribe to ACQ2Check out the latest swag in the ACQ Merch Store!Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
Membership | Donations | Spotify | YouTube | Apple PodcastsThis week we hear from Larry Muhlstein, who worked on Responsible AI at Google and DeepMind before leaving to found the Holistic Technology Project. In Larry's words:“Care is crafted from understanding, respect, and will. Once care is deep enough and in a generative reciprocal relationship, it gives rise to self-expanding love. My work focuses on creating such systems of care by constructing a holistic sociotechnical tree with roots of philosophical orientation, a trunk of theoretical structure, and technological leaves and fruit that offer nourishment and support to all parts of our world. I believe that we can grow love through technologies of togetherness that help us to understand, respect, and care for each other. I am committed to supporting the responsible development of such technologies so that we can move through these trying times towards a world where we are all well together.”In this episode, Larry and I explore the “roots of philosophical orientation” and “trunk of theoretical structure” as he lays them out in his Technological Love knowledge garden, asking how technologies for reality, perspectives, and karma can help us grow a world in love. What is just enough abstraction? When is autonomy desirable and when is it a false god? What do property and selfhood look like in a future where the ground truths of our interbeing shape design and governance?It's a long, deep conversation on fundamentals we need to reckon with if we are to live in futures we actually want. I hope you enjoy it as much as we did.Our next dialogue is with Sam Arbesman, resident researcher at Lux Capital and author of The Magic of Code. We'll interrogate the distinctions between software and spellcraft, explore the unique blessings and challenges of a world defined by advanced computing, and probe the good, bad, and ugly of futures that move at the speed of thought…✨ Show Links• Hire me for speaking or consulting• Explore the interactive knowledge garden grown from over 250 episodes• Explore the Humans On The Loop dialogue and essay archives• Browse the books we discuss on the show at Bookshop.org• Dig into nine years of mind-expanding podcasts✨ Additional Resources“Growing A World In Love” — Larry Muhlstein at Hurry Up, We're Dreaming“The Future Is Both True & False” — Michael Garfield on Medium“Sacred Data” — Michael Garfield at Hurry Up, We're Dreaming“The Right To Destroy” — Lior Strahilevitz at Chicago Unbound“Decentralized Society: Finding Web3's Soul” — Puja Ohlhaver, E. Glen Weyl, and Vitalik Buterin at SSRN✨ MentionsKarl Schroeder's “Degrees of Freedom”Joshua DiCaglio's Scale TheoryGeoffrey West's ScaleHannah ArendtKen WilberDoug Rushkoff's Survival of the RichestManda Scott's Any Human Power Torey HaydenChaim Gingold's Building SimCityJames P. Carse's Finite & Infinite GamesJohn C. Wright's The Golden OecumeneEckhart Tolle's The Power of Now✨ Related Episodes This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
Trevor (who is also Microsoft's “Chief Questions Officer”) and Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google's DeepMind do a deep dive into whether the benefits of AI to the human race outweigh its unprecedented risks. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.