Vishal Gupta, engineering manager, machine learning at Reddit, joins the podcast to explain how the social media community platform uses artificial intelligence to improve user experience and ad relevance. Much of the advertising work relies on increasingly sophisticated recommender systems that have evolved from simple collaborative filtering to deep learning and large language model–based systems capable of multimodal understanding. https://mitsmr.com/4onhUMg

Vishal and Sam also explore the philosophical and ethical aspects of AI-driven platforms. Vishal emphasizes the importance of balance — between exploration and exploitation in recommendations, between advertiser goals and user experience, and between human- and machine-generated content. He argues that despite the rise of AI-generated material, authentic human conversation remains vital and even more valuable as models depend on it for training. Read the episode transcript here.

Guest bio: Vishal Gupta is a seasoned engineering leader who leads multiple artificial intelligence and machine learning teams at Reddit in the ads domain. He has a decade of experience working on cutting-edge machine learning techniques at companies like DeepMind, Google, and Twitter. Gupta is passionate about applied AI research that significantly contributes to a company's top and bottom lines.

Me, Myself, and AI is a podcast produced by MIT Sloan Management Review and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder. We encourage you to rate and review our show. Your comments may be used in Me, Myself, and AI materials.
Jed Borovik, Product Lead at Google Labs, joins Latent Space to unpack how Google is building the future of AI-powered software development with Jules. From his journey discovering GenAI through Stable Diffusion to leading one of the most ambitious coding agent projects in tech, Borovik shares behind-the-scenes insights into how Google Labs operates at the intersection of DeepMind's model development and product innovation. We explore Jules' approach to autonomous coding agents and why they run on their own infrastructure, how Google simplified their agent scaffolding as models improved, and why embeddings-based RAG is giving way to attention-based search. Borovik reveals how developers are using Jules for hours or even days at a time, the challenges of managing context windows that push 2 million tokens, and why coding agents represent both the most important AI application and the clearest path to AGI. This conversation reveals Google's positioning in the coding agent race, the evolution from internal tools to public products, and what founders, developers, and AI engineers should understand about building for a future where AI becomes the new brush for software engineering.

Chapters
00:00:00 Introduction and GitHub Universe Recap
00:00:57 New York Tech Scene and East Coast Hackathons
00:02:19 From Google Search to AI Coding: Jed's Journey
00:04:19 Google Labs Mission and DeepMind Collaboration
00:06:41 Jules: Autonomous Coding Agents Explained
00:09:39 The Evolution of Agent Scaffolding and Model Quality
00:11:30 RAG vs Attention: The Shift in Code Understanding
00:13:49 Jules' Journey from Preview to Production
00:15:05 AI Engineer Summit: Community Building and Networking
00:25:06 Context Management in Long-Running Agents
00:29:02 The Future of Software Engineering with AI
00:36:26 Beyond Vibe Coding: Spec Development and Verification
00:40:20 Multimodal Input and Computer Use for Coding Agents
ConnectWise has announced enhancements to its Asio platform, which now includes expanded third-party patching for over 7,000 applications, improvements to the professional services automation (PSA) user experience, and advanced robotic process automation (RPA) capabilities. These updates aim to address security vulnerabilities in widely exploited applications and streamline operations for managed service providers (MSPs). The new features are set to improve operational efficiency and security, with the expanded patching available immediately and RPA features expected to roll out in the coming months.

In conjunction with these updates, ESET has integrated its ESET Protect platform with ConnectWise Asio, allowing for one-click deployment of security management tools. This integration is designed to enhance the efficiency of security tasks for MSPs, enabling them to meet legal and insurance requirements more effectively. Additionally, ConnectSecure has introduced AI-powered vulnerability management reports that prioritize risks based on business impact rather than just technical severity, further supporting MSPs in delivering proactive risk assessments.

OpenAI has surpassed 1 million business customers, marking it as the fastest-growing business platform in history. A Wharton study indicates that 75% of enterprises using AI technologies report a positive return on investment. Meanwhile, Google has launched Gemini AI tools for stock traders and improved hurricane prediction capabilities through its DeepMind technology, showcasing the growing integration of AI across various sectors, including finance and weather forecasting.

For MSPs and IT service leaders, these developments underscore the importance of integrating advanced security and AI capabilities into their service offerings. As the landscape shifts toward cyber resilience and AI-driven solutions, providers must adapt by leveraging these tools to enhance their operational efficiency and client services.
The focus on measurable outcomes, such as trust and risk management, will be crucial for maintaining competitive advantage in an increasingly automated environment.

Four things to know today:
00:00 At IT Nation Connect, ConnectWise Focuses on Asio Enhancements While Ecosystem Partners Deliver the Bigger Innovation
05:37 N-able Rebrands Its Future: Strong Earnings and AI-Fueled Pivot Toward Cyber Resilience
08:31 From ChatGPT to Hurricanes: How AI's Expansion Is Turning Tools Into Core Business Systems
11:14 Trust, Transparency, and Transformation: How AI Acceleration Is Forcing Leaders to Rethink Human Metrics

This is the Business of Tech. Supported by: https://mailprotector.com/mspradio/
Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI.

Why "Everyone Dies" Gets AGI All Wrong
The Nonprofit Feeding the Entire Internet to AI Companies
Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey
Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different
Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough'
How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise
Perplexity's new AI tool aims to simplify patent research
Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do
Amazon and Perplexity have kicked off the great AI web browser fight
Neural network finds an enzyme that can break down polyurethane
Dictionary.com names 6-7 as 2025's word of the year
Tech companies don't care that students use their AI agents to cheat
The Morning After: Musk talks flying Teslas on Joe Rogan's show
The Hatred of Podcasting | Brace Belden
TikTok announces its first awards show in the US
Google wants to build solar-powered data centers — in space
Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028
American Museum of Tort Law
Dog Chapel - Dog Mountain
Nicvember masterlist
Pornhub says UK visitors down 77% since age checks came in

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Jeremy Berman

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit agntcy.org spaceship.com/twit monarch.com with code IM
In this episode of Sidecar Sync, Mallory Mejias is joined by marine biologist and behavioral researcher Dr. Denise Herzing for a one-of-a-kind conversation about dolphins, data, and deep learning. Dr. Herzing shares insights from her 40-year study of Atlantic spotted dolphins and how that lifetime of underwater research is now powering DolphinGemma, an open-source large language model trained on dolphin vocalizations. The two discuss what it means to label meaning in animal communication, how AI is finally catching up to the natural world, and why collaboration across disciplines is essential to understanding both language and intelligence, human or otherwise.

Dr. Denise Herzing is the Founder and Research Director of the Wild Dolphin Project, leading nearly four decades of groundbreaking research on Atlantic spotted dolphins in the Bahamas. She holds degrees in Marine Zoology and Behavioral Biology (B.S., M.A., Ph.D.) and serves as an Affiliate Assistant Professor at Florida Atlantic University. A Guggenheim and Explorers Club Fellow, Dr. Herzing has advised the Lifeboat Foundation and American Cetacean Society and sits on the board of Schoolyard Films. Her work has been featured in National Geographic, BBC, PBS, Discovery, and her TED2013 talk. She is the author of Dolphin Diaries and co-editor of Dolphin Communication and Cognition.
Which AI tool do you use most? ChatGPT, Grok, Gemini? Or do you switch between them? The book 《AI霸主》 helps us understand the competition in the AI industry, starting with Sam Altman, the father of ChatGPT, and Demis Hassabis, who developed AlphaGo to defeat the world Go champion. We discover that AI's transformation of humanity is not on the scale of "horse-drawn carriage to automobile" but the far more drastic "candle to electric light."

This episode is a special feature of the Eslite Bookstore R79 underground reading curators' picks, tied to this issue's theme "Our Distance from Technology." We invited 寶博士 of the podcast 《寶博朋友說》 to discuss 《AI霸主》 with us, covering his observations on AI, the influence of Sam Altman and Demis Hassabis on the field, and why speed is the key to competition in the AI industry.

Guest | 寶博士 (technology legislator)
Host | 林子榆 (Eslite curator)

Read along: AI霸主 https://esliteme.pse.is/8az7u5

⭓ Eslite co-branded card | Earn rewards every day. See event details.
Working and Living in Switzerland - the podcast by David Talerman
David Talerman welcomes Rebeca Valença, a Zurich-based coach who has guided more than 1,200 candidates into competitive roles in Switzerland and internationally. The scene is set: the Swiss economy is showing signs of caution in industry and construction, finance is slowing, but trade and hospitality are holding up. The burning question: can you work in Zurich without speaking German? Yes, Rebeca answers, provided you work in international, non-client-facing environments. To serve the DACH region, German remains a decisive asset. As for hiring pace, Zurich is buzzing: since mid-January, budgets have been unlocked, contracts signed, interviews stacked back to back.

Above all, Zurich is establishing itself as a deep-tech hub. Rebeca cites cybersecurity, applied AI, engineering, and automation, all in a logic of "resilience." ETH feeds the ecosystem, and the announced arrival of OpenAI and DeepMind is accelerating the momentum further. Quantum computing is taking shape, mostly in Basel.

To break in, applying online is not enough: the key is networking around shared interests (crypto, data, AI, product...) in coworking spaces, labs, and meetups. Many events are held in English; there you meet decision makers, recruiters, and peers who move careers forward. On skills, the market rewards those who "apply" AI to their own craft and can demonstrate concrete gains on their CV. On hiring methods, Rebeca debunks a myth: despite applicant tracking systems (ATS), screening remains largely manual for compliance reasons. Timelines often stretch to 3-4 months in traditional organizations, shorter when business pressure is high (IPO, private equity, market entry). For senior profiles, "reverse recruitment" takes over: many roles are hidden, sensitive, or ultra-niche.
These roles are landed through dedicated headhunting, advisory boards, or expert mandates, by activating the alumni networks of companies that hire similar profiles.

Living in Zurich? Top quality of life, and a thriving meetup scene that is demanding without being aggressive. Social integration can take time (language, cost, expat mobility), but putting down local roots helps. And you can work for Zurich employers in hybrid mode from French-speaking Switzerland. To find the right events: Meetup, Eventbrite, Luma... and the "House of AI" run by Rebeca, dedicated to concrete uses of AI.

The underlying message is simple: Zurich concentrates the majority of opportunities (along with Zug), English can be enough, but it is targeted networking and applied AI that win the game. My name is David Talerman, I am
Artificial intelligence has changed how we think about service, but few companies have bridged the gap between automation and genuine intelligence. In this episode of Tech Talks Daily, I'm joined by Puneet Mehta, CEO of Netomi, to discuss how customer experience is evolving in an age where AI doesn't just respond but plans, acts, and optimizes in real time. Puneet has been building in AI long before the current hype cycle. Backed by early investors such as Greg Brockman of OpenAI and the founders of DeepMind, Netomi has become one of the leading platforms driving AI-powered customer experience for global enterprises. Their technology quietly powers interactions at airlines, insurers, and retailers that most of us use every day. What makes Netomi stand out is not its scale but the philosophy behind it. Rather than designing AI to replace humans, Netomi built an agent-centric model where AI and people work together. Puneet explains how their Autopilot and Co-Pilot modes allow human agents to stay in control while AI accelerates everything from response time to insight generation. It is an approach that sees humans teaching AI, AI assisting humans, and both learning from each other to create what he calls an agentic factory. We explore how Netomi's platform can deploy at Fortune 50 scale in record time without forcing companies to overhaul existing systems. Puneet reveals how pre-built integrations, AI recipes, and a no-code studio allow business teams to roll out solutions in weeks rather than months. The focus is on rapid time-to-value, trust, and safety through what he calls sanctioned AI, a framework that ensures governance, transparency, and compliance in every customer interaction. As our conversation unfolds, Puneet describes how this evolution is transforming the contact center from a cost center into a loyalty engine. 
By using AI to anticipate needs and resolve issues before customers reach out, companies are creating experiences that feel more personal, more proactive, and more human. This is a glimpse into the future of enterprise AI, where trust, speed, and empathy define the next generation of customer experience. Listen now to hear how Netomi is reimagining the role of AI in service and setting new standards for how businesses build relationships at scale.
The weekly round table in which we review the latest news from the world of science. In today's episode, Side A: -Reminder: iVoox Awards (5:00) -The 3I/ATLAS wager (8:00) -The shape of stalagmites (00:17) This episode continues on Side B. Panelists: Cecilia Garraffo, Juan Carlos Gil, Francis Villatoro. Cover image created with Seedream 4 4k. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that
The weekly round table in which we review the latest news from the world of science. In today's episode, Side B: -The shape of stalagmites (continued) (00:00) -Multispectral learning from Google DeepMind (09:00) -Quantum entanglement in gravity vs. quantum gravitation (39:00) -Absorption of gravitons by photons at LIGO (1:11:00) -Halloween at the planetarium (1:17:00) -Listener messages (1:34:00) This episode is a continuation of Side A. Panelists: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar. Cover image created with Seedream 4 4k. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that
Google Q3 2025 Post-Mortem: AI Execution Over AI Hype

In this episode of Around the Desk, Sean Emory, Founder & CIO of Avory & Co., breaks down why investors are rewarding Google's spending while punishing others, and how its strategy from TPUs to Gemini shows real ROI in the new compute era.

We cover:
• Revenue acceleration across Search, YouTube, and Cloud (+15% to +34%)
• Gemini's rapid growth to 650M users, 300M paid
• Why CAPEX to $93B is seen as productive, not reckless
• Anthropic's commitment to TPUs and the growing Cloud backlog (+46%)
• How AI integration is lifting engagement and monetization
• Why Google's AI flywheel looks more efficient than peers

Disclaimer: Avory is an investor in Alphabet. Avory & Co. is a Registered Investment Adviser. This platform is solely for informational purposes. Advisory services are only offered to clients or prospective clients where Avory & Co. and its representatives are properly licensed or exempt from licensure. Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital. No advice may be rendered by Avory & Co. unless a client service agreement is in place. Listeners and viewers are encouraged to seek advice from a qualified tax, legal, or investment adviser to determine whether any information presented may be suitable for their specific situation. Past performance is not indicative of future performance. "Likes" are not intended to be endorsements of our firm, our advisors, or our services. Please be aware that while we monitor comments and "likes" left on this page, we do not endorse or necessarily share the same opinions expressed by site users. While we appreciate your comments and feedback, please be aware that any form of testimony from current or past clients about their experience with our firm is strictly forbidden under current securities laws. Please honor our request to limit your posts to industry-related educational information and comments.
Third-party rankings and recognitions are no guarantee of future investment success and do not ensure that a client or prospective client will experience a higher level of performance or results. These ratings should not be construed as an endorsement of the advisor by any client, nor are they representative of any one client's evaluation. Please reach out to Houston Hess, our head of Compliance and Operations, for any further details.
A Google model suddenly suggests the right treatment for an eye disease. OpenAI and DeepMind win gold at the Math Olympiad. And a professor is shocked because an AI arrives at his still-unpublished research hypothesis. Fritz and Gregor examine the most exciting developments at the intersection of AI and research.
Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her Ph.D. and master's in Computer Science at the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling.

Time stamps of the conversation:
00:00 Highlights
01:20 Introduction
02:08 Entry point in AI
03:04 Background in Cognitive Science & Computer Science
04:55 Research at Google DeepMind
05:47 Importance of language-vision in AI
10:36 Impact of architecture vs. data on performance
13:06 Transformer architecture
14:30 Evaluating AI models
19:02 Can LLMs understand numerical concepts
24:40 Theory-of-mind in AI
27:58 Do LLMs learn theory of mind?
29:25 LLMs as judge
35:56 Publish vs. perish culture in AI research
40:00 Working at Google DeepMind
42:50 Doing a Ph.D. vs not in AI (at least in 2025)
48:20 Looking back on research career

More about Aida: http://www.aidanematzadeh.me/

About the Host: Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis. LinkedIn: shahjay22 Twitter: jaygshah22 Homepage: https://jaygshah.github.io/ for any queries. Stay tuned for upcoming webinars!

**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.**
This episode is a re-air of one of our most popular conversations from this year, featuring insights worth revisiting. Thank you for being part of the Data Stack community. Stay up to date with the latest episodes at datastackshow.com.

This week on The Data Stack Show, Eric and John welcome Misha Laskin, Co-Founder and CEO of ReflectionAI. Misha shares his journey from theoretical physics to AI, detailing his experiences at DeepMind. The discussion covers the development of AI technologies, the concepts of artificial general intelligence (AGI) and superhuman intelligence, and their implications for knowledge work. Misha emphasizes the importance of robust evaluation frameworks and the potential of AI to augment human capabilities. The conversation also touches on autonomous coding, geofencing in AI tasks, the future of human-AI collaboration, and more.

Highlights from this week's conversation include:
Misha's Background and Journey in AI (1:13)
Childhood Interest in Physics (4:43)
Future of AI and Human Interaction (7:09)
AI's Transformative Nature (10:12)
Superhuman Intelligence in AI (12:44)
Clarifying AGI and Superhuman Intelligence (15:48)
Understanding AGI (18:12)
Counterintuitive Intelligence (22:06)
Reflection's Mission (25:00)
Focus on Autonomous Coding (29:18)
Future of Automation (34:00)
Geofencing in Coding (38:01)
Challenges of Autonomous Coding (40:46)
Evaluations in AI Projects (43:27)
Example of Evaluation Metrics (46:52)
Starting with AI Tools and Final Takeaways (50:35)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Now on Spotify Video! Mustafa Suleyman's journey to becoming one of the most influential figures in artificial intelligence began far from a Silicon Valley boardroom. The son of a Syrian immigrant father in London, his early years in human rights activism shaped his belief that technology should be used for good. That vision led him to co-found DeepMind, acquired by Google, and later launch Inflection AI. Now, as CEO of Microsoft AI, he explores the next era of AI in action. In this episode, Mustafa discusses the impact of AI in business, how it will transform the future of work, and even our relationships.

In this episode, Hala and Mustafa will discuss:
(00:00) Introduction
(02:42) The Coming Wave: How AI Will Disrupt Everything
(06:45) Artificial Intelligence as a Double-Edged Sword
(11:33) From Human Rights to Ethical AI Leadership
(15:35) What Is AGI, Narrow AI, and Hallucinations of AI?
(24:15) Emotional AI and the Rise of Digital Companions
(33:03) Microsoft's Vision for Human-Centered AI
(41:47) Can We Contain AI Before Its Revolution?
(48:33) The Future of Work in an AI-Powered World
(52:22) AI in Business: Advice for Entrepreneurs

Mustafa Suleyman is the CEO of Microsoft AI and a leading figure in artificial intelligence. He co-founded DeepMind, one of the world's foremost AI research labs, acquired by Google, and went on to co-found Inflection AI, a machine learning and generative AI company. He is also the bestselling author of The Coming Wave. Recognized globally for his influence, Mustafa was named one of Time's 100 most influential people in AI in both 2023 and 2024.

Sponsored By:
Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING
Shopify - Start your $1/month trial at Shopify.com/profiting.
Mercury - Mercury streamlines your banking and finances in one place. Learn more at mercury.com/profiting. Mercury is a financial technology company, not a bank.
Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.
Quo - Get 20% off your first 6 months at Quo.com/PROFITING
Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING
Framer - Go to Framer.com and use code PROFITING to launch your site for free.
Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order.
Pipedrive - Get a 30-day free trial at pipedrive.com/profiting
Airbnb - Find yourself a cohost at airbnb.com/host

Resources Mentioned:
Mustafa's Book, The Coming Wave: bit.ly/TheComing_Wave
Mustafa's LinkedIn: linkedin.com/in/mustafa-suleyman
Active Deals - youngandprofiting.com/deals

Key YAP Links:
Reviews - ratethispodcast.com/yap
YouTube - youtube.com/c/YoungandProfiting
Newsletter - youngandprofiting.co/newsletter
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
Social + Podcast Services: yapmedia.com
Transcripts - youngandprofiting.com/episodes-new
When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: "Of course we're probably penetrated by the CCP already, and if they really wanted something, they could take it."

This isn't paranoid speculation. It's the working assumption of people whose job is to protect frontier AI models worth billions of dollars. And they're not even trying that hard to stop it — because the security measures that might actually work would slow them down in the race against competitors.

Full transcript, highlights, and links to learn more: https://80k.info/dk

Daniel is the founder of the AI Futures Project and author of AI 2027, a detailed scenario showing how we might get from today's AI systems to superintelligence by the end of the decade. Over a million people read it in the first few weeks, including US Vice President JD Vance. When Daniel talks to researchers at Anthropic, OpenAI, and DeepMind, they tell him the scenario feels less wild to them than to the general public — because many of them expect something like this to happen.

Daniel's median timeline? 2029. But he's genuinely uncertain, putting 10–20% probability on AI progress hitting a long plateau.

When he first published AI 2027, his median forecast for when superintelligence would arrive was 2027, rather than 2029. So what shifted his timelines recently? Partly a fascinating study from METR showing that AI coding assistants might actually be making experienced programmers slower — even though the programmers themselves think they're being sped up. The study suggests a systematic bias toward overestimating AI effectiveness — which, ironically, is good news for timelines, because it means we have more breathing room than the hype suggests.

But Daniel is also closely tracking another METR result: AI systems can now reliably complete coding tasks that take humans about an hour. That capability has been doubling every six months in a remarkably straight line.
Extrapolate a couple more years and you get systems completing month-long tasks. At that point, Daniel thinks we're probably looking at genuine AI research automation — which could cause the whole process to accelerate dramatically.

At some point, superintelligent AI will be limited by its inability to directly affect the physical world. That's when Daniel thinks superintelligent systems will pour resources into robotics, creating a robot economy in months.

Daniel paints a vivid picture: imagine transforming all car factories (which have similar components to robots) into robot production factories — much like historical wartime efforts to redirect production of domestic goods to military goods. Then imagine the frontier robots of today hooked up to a data centre running superintelligences controlling the robots' movements to weld, screw, and build. An intermediate step might even be unskilled human workers coached through construction tasks by superintelligences via their phones.

Nothing about an effort like this is impossible in principle. And there would be enormous pressure to go in this direction: whoever builds a superintelligence-powered robot economy first will get unheard-of economic and military advantages.

From there, Daniel expects the default trajectory to lead to AI takeover and human extinction — not because superintelligent AI will hate humans, but because it can better pursue its goals without us.

But Daniel has a better future in mind — one he puts roughly 25–30% odds that humanity will achieve.
This future involves international coordination and hardware verification systems to enforce AI development agreements, plus democratic processes for deciding what values superintelligent AIs should have — because in a world with just a handful of superintelligent AI systems, those few minds will effectively control everything: the robot armies, the information people see, the shape of civilisation itself.

Right now, nobody knows how to specify what values those minds will have. We haven't solved alignment. And we might only have a few more years to figure it out.

Daniel and host Luisa Rodriguez dive deep into these stakes in today's interview.

What did you think of the episode? https://forms.gle/HRBhjDZ9gfM8woG5A

This episode was recorded on September 9, 2025.

Chapters:
Cold open (00:00:00)
Who's Daniel Kokotajlo? (00:00:37)
Video: We're Not Ready for Superintelligence (00:01:31)
Interview begins: Could China really steal frontier model weights? (00:36:26)
Why we might get a robot economy incredibly fast (00:42:34)
AI 2027's alternate ending: The slowdown (01:01:29)
How to get to even better outcomes (01:07:18)
Updates Daniel's made since publishing AI 2027 (01:15:13)
How plausible are longer timelines? (01:20:22)
What empirical evidence is Daniel looking out for to decide which way things are going? (01:40:27)
What post-AGI looks like (01:49:41)
Whistleblower protections and Daniel's unsigned NDA (02:04:28)

Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore
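The six-month doubling trend described in this episode invites a quick back-of-the-envelope extrapolation. The sketch below uses my own illustrative assumptions (tasks starting at roughly 1 hour, a clean six-month doubling, and a "month-long task" of about 167 working hours), not figures stated in the episode:

```python
import math

# Back-of-the-envelope extrapolation of the METR-style trend described above.
# All inputs are illustrative assumptions, not data from the episode:
# tasks start at ~1 hour, capability doubles every 6 months, and a
# "month-long task" is ~167 working hours (40 h/week * ~4.2 weeks).
start_hours = 1.0
doubling_period_months = 6
target_hours = 167.0

# Number of doublings to go from hour-long to month-long tasks,
# then convert doublings into calendar time.
doublings_needed = math.log2(target_hours / start_hours)
years_needed = doublings_needed * doubling_period_months / 12

print(f"~{doublings_needed:.1f} doublings, ~{years_needed:.1f} years")
```

Under these assumptions the jump from hour-long to month-long tasks takes roughly seven doublings, i.e. a few years, broadly consistent with the "extrapolate a couple more years" framing in the description.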
In Qubit's new podcast series, AI Híradó, we regularly walk through the most important developments in artificial intelligence from recent weeks, and how they are shaping our present and future.

See omnystudio.com/listener for privacy information.
* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions.
* Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design.
* Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing.
* Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims.
* Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively 'objective' results.
* Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system.
* Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm.
* Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes.
* Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal.
* Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances—because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective.
* Number 1: OpenAI's Scheming Models (2025). OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. They faked compliance to hide their true behavior. That's AI deliberately learning to scheme.
This episode is a little different from our usual fare: It's a conversation with our head of AI training Alex Duffy about Good Start Labs, a company he incubated inside Every. Today, Good Start Labs is spinning out of Every as a separate company with $3.6 million in funding from General Catalyst, Inovia, Every, and a group of angel investors from top-tier AI labs like DeepMind.

We get into how Alex learned some of his biggest lessons about the real world from games, starting with RuneScape, which taught him how markets work and how not to get scammed. He explains why the static benchmarks we use to evaluate LLMs today are breaking down, and how games like Diplomacy offer a richer, more dynamic way to test and train large language models. Finally, Alex shares where he sees the most promise in AI—software, life sciences, and education—and why he believes games can make the models we use smarter, while helping people understand and use AI more effectively.

If you found this episode interesting, please like, subscribe, comment, and share.

Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt.
It's usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Timestamps
00:00:00 - Start
00:01:48 - Introduction
00:04:14 - Why evals and benchmarks are broken
00:07:13 - The sneakiest LLMs in the market
00:13:00 - A competition that turns prompting into a sport
00:15:49 - Building a business around using games to make AI better
00:22:39 - Can language models learn how to be funny
00:25:31 - Why games are a great way to evaluate and train new models
00:26:58 - What child psychology tells us about games and AI
00:30:10 - Using games to unlock continual learning in AI
00:36:42 - Why Alex cares deeply about games
00:44:37 - Where Alex sees the most promise in AI
00:50:54 - Rethinking how young people start their careers in the age of AI

Links to resources mentioned in the episode:
Alex Duffy: alex duffy (@alxai_)
Good Start Labs: https://goodstartlabs.com/, good start (@goodstartlabs)
The book Alex is reading about the importance of games: Playing with Reality: How Games Shape Our World
The book Dan recommends by the psychoanalyst D.W. Winnicott: Playing and Reality
In this episode of SparX, Mukesh Bansal speaks with Manish Gupta, Senior Director at Google DeepMind. They discuss how artificial intelligence is evolving, what it means to build truly inclusive AI, and why India must aim higher in research, innovation, and ambition.

Manish shares DeepMind's vision of solving "root node problems," fundamental scientific challenges that unlock breakthroughs across fields, and how AI is already accelerating discovery in areas like biology, materials, and medicine.

They talk about:
- What AGI really means and how close we are to it.
- Why India needs to move from using AI to creating it.
- The missing research culture in Indian industry, and how to fix it.
- How AI can transform healthcare, learning, and agriculture in India.
- Why ambition, courage, and willingness to fail are essential to deep innovation.

Manish also shares insights from his career across the IBM T.J. Watson Research Center and now DeepMind, two of the world's most iconic research environments, and what it will take for India to build its own.

If you care about India's AI journey, research, and the future of innovation, this conversation is a masterclass in what it takes to move from incremental progress to world-changing breakthroughs.
Google DeepMind's AI agent finds and fixes vulnerabilities
California law lets consumers universally opt out of data sharing
China-Nexus actors weaponize 'Nezha' open source tool

Huge thanks to our sponsor, ThreatLocker
Cybercriminals don't knock — they sneak in through the cracks other tools miss. That's why organizations are turning to ThreatLocker. As a zero-trust endpoint protection platform, ThreatLocker puts you back in control, blocking what doesn't belong and stopping attacks before they spread. Zero Trust security starts here — with ThreatLocker. Learn more at ThreatLocker.com.
Our 221st episode with a summary and discussion of last week's big AI news!

Recorded on 09/19/2025

Note: we transitioned to a new RSS feed and it seems this episode did not make it there, so it may be posted about 2 weeks past the release date.

Hosted by Andrey Kurenkov and co-hosted by Michelle Lee

Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
- OpenAI releases a new version of Codex integrated with GPT-5, enhancing coding capabilities and aiming to compete with other AI coding tools like Claude Code.
- Significant updates in the robotics sector include new ventures in humanoid robots from companies like Figure AI and China's Unitree, as well as expansions in robotaxi services from Tesla and Amazon's Zoox.
- New open-source models and research advancements were discussed, including Google DeepMind's self-improving foundation model for robotics and a physics foundation model aimed at generalizing across various physical systems.
- Legal battles continue to surface in the AI landscape with Warner Bros. suing Midjourney for copyright violations and Rolling Stone suing Google over AI-generated content summaries, highlighting challenges in AI governance and ethics.

Timestamps:
(00:00:10) Intro / Banter

Tools & Apps
(00:02:33) OpenAI upgrades Codex with a new version of GPT-5
(00:04:02) Google Injects Gemini Into Chrome as AI Browsers Go Mainstream | WIRED
(00:06:14) Anthropic's Claude can now make you a spreadsheet or slide deck.
| The Verge
(00:07:12) Luma AI's New Ray3 Video Generator Can 'Think' Before Creating - CNET

Applications & Business
(00:08:32) OpenAI secures Microsoft's blessing to transition its for-profit arm | TechCrunch
(00:10:31) Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic | TechCrunch
(00:12:00) Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report
(00:13:52) China's Unitree plans $7 billion IPO valuation as humanoid robot race heats up
(00:15:45) Tesla's robotaxi plans for Nevada move forward with testing permit | TechCrunch
(00:17:48) Amazon's Zoox jumps into U.S. robotaxi race with Las Vegas launch
(00:19:27) Replit hits $3B valuation on $150M annualized revenue | TechCrunch
(00:21:14) Perplexity reportedly raised $200M at $20B valuation | TechCrunch

Projects & Open Source
(00:22:08) [2509.07604] K2-Think: A Parameter-Efficient Reasoning System
(00:24:31) [2509.09614] LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering

Research & Advancements
(00:28:17) [2509.15155] Self-Improving Embodied Foundation Models
(00:31:47) [2509.13805] Towards a Physics Foundation Model
(00:34:26) [2509.12129] Embodied Navigation Foundation Model

Policy & Safety
(00:37:49) Anthropic endorses California's AI safety bill, SB 53 | TechCrunch
(00:40:12) Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle
(00:42:02) Rolling Stone Publisher Sues Google Over AI Overview Summaries

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Our 222nd episode with a summary and discussion of last week's big AI news!

Recorded on 10/03/2025

Hosted by Andrey Kurenkov and co-hosted by Jon Krohn

Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
(00:00:10) Intro / Banter
(00:03:08) News Preview
(00:03:56) Response to listener comments

Tools & Apps
(00:04:51) ChatGPT parent company OpenAI announces Sora 2 with AI video app
(00:11:35) Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents and coding supremacy | The Verge
(00:22:25) Meta launches 'Vibes,' a short-form video feed of AI slop | TechCrunch
(00:26:42) OpenAI launches ChatGPT Pulse to proactively write you morning briefs | TechCrunch
(00:33:44) OpenAI rolls out safety routing system, parental controls on ChatGPT | TechCrunch
(00:35:53) The Latest Gemini 2.5 Flash-Lite Preview is Now the Fastest Proprietary Model (External Tests) and 50% Fewer Output Tokens - MarkTechPost
(00:39:54) Microsoft just added AI agents to Word, Excel, and PowerPoint - how to use them | ZDNET

Applications & Business
(00:42:41) OpenAI takes on Google, Amazon with new agentic shopping system | TechCrunch
(00:46:01) Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product | WIRED
(00:49:54) OpenAI is the world's most valuable private company after private stock sale | TechCrunch
(00:53:07) Elon Musk's xAI accuses OpenAI of stealing trade secrets in new lawsuit | Technology | The Guardian
(00:55:40) Former OpenAI and DeepMind researchers raise whopping $300M seed to automate science | TechCrunch

Projects & Open Source
(00:58:26) [2509.16941] SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks?

Research & Advancements
(01:01:28) [2509.17196] Evolution of Concepts in Language Model Pre-Training
(01:05:36) [2509.19284] What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoT

Lightning round
(01:09:37) [2507.02954] Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III
(01:12:03) [2509.24552] Short window attention enables long-term memorization

Policy & Safety
(01:18:11) SB 53, the landmark AI transparency bill, is now law in California | The Verge
(01:24:07) Elon Musk's xAI offers Grok to federal government for 42 cents | TechCrunch
(01:25:23) Character.AI removes Disney characters from platform after studio issues warning
(01:28:50) Spotify's Attempt to Fight AI Slop Falls on Its Face

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Google faces the greatest innovator's dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?

Today we tell the complete story of Google's 20+ year AI journey: from their first tiny language model in 2001 through the creation of Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk's fury over Google's DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google. This is the story of how the world's greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?

Sponsors:
Many thanks to our fantastic Fall '25 Season partners:
J.P. Morgan Payments
Sentry
WorkOS
Shopify

Acquired's 10th Anniversary Celebration!
When: October 20th, 4:00 PM PT
Who: All of you!
Where: https://us02web.zoom.us/j/84061500817?pwd=opmlJrbtOAen4YOTGmPlNbrOMLI8oo.1

Links:
Sign up for email updates and vote on future episodes!
Geoff Hinton's 2007 Tech Talk at Google
Our recent ACQ2 episode with Tobi Lutke
Worldly Partners' Multi-Decade Alphabet Study
In the Plex
Supremacy
Genius Makers
All episode sources

Carve Outs:
We're hosting the Super Bowl Innovation Summit!
F1: The Movie
Travelpro suitcases
Glue Guys Podcast
Sea of Stars
Stepchange Podcast

More Acquired:
Get email updates and vote on future episodes!
Join the Slack
Subscribe to ACQ2
Check out the latest swag in the ACQ Merch Store!

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
Dr. Ilia Shumailov - Former DeepMind AI Security Researcher, now building security tools for AI agents

Ever wondered what happens when AI agents start talking to each other—or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.

**SPONSOR MESSAGES**
Check out NotebookLM for your research project, it's really powerful: https://notebooklm.google.com/
Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst - and be the first to see the results and benchmark their practices against the wider community!
cyber•Fund - https://cyber.fund/?utm_source=mlst - is a founder-led investment firm accelerating the cybernetic economy
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA, ++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst

We're racing toward a world where AI agents will handle our emails, manage our finances, and interact with sensitive data 24/7. But there is a problem. These agents are nothing like human employees. They never sleep, they can touch every endpoint in your system simultaneously, and they can generate sophisticated hacking tools in seconds. Traditional security measures designed for humans simply won't work.

Dr. Ilia Shumailov
https://x.com/iliaishacked
https://iliaishacked.github.io/
https://sequrity.ai/

TRANSCRIPT:
https://app.rescript.info/public/share/dVGsk8dz9_V0J7xMlwguByBq1HXRD6i4uC5z5r7EVGM

TOC:
00:00:00 - Introduction & Trusted Third Parties via ML
00:03:45 - Background & Career Journey
00:06:42 - Safety vs Security Distinction
00:09:45 - Prompt Injection & Model Capability
00:13:00 - Agents as Worst-Case Adversaries
00:15:45 - Personal AI & CAML System Defense
00:19:30 - Agents vs Humans: Threat Modeling
00:22:30 - Calculator Analogy & Agent Behavior
00:25:00 - IMO Math Solutions & Agent Thinking
00:28:15 - Diffusion of Responsibility & Insider Threats
00:31:00 - Open Source Security Concerns
00:34:45 - Supply Chain Attacks & Trust Issues
00:39:45 - Architectural Backdoors
00:44:00 - Academic Incentives & Defense Work
00:48:30 - Semantic Censorship & Halting Problem
00:52:00 - Model Collapse: Theory & Criticism
00:59:30 - Career Advice & Ross Anderson Tribute

REFS:
Lessons from Defending Gemini Against Indirect Prompt Injections. https://arxiv.org/abs/2505.14534
Defeating Prompt Injections by Design. Debenedetti, E., Shumailov, I., Fan, T., Hayes, J., Carlini, N., Fabian, D., Kern, C., Shi, C., Terzis, A., & Tramèr, F. https://arxiv.org/pdf/2503.18813
Agentic Misalignment: How LLMs could be insider threats. https://www.anthropic.com/research/agentic-misalignment
Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces! Subbarao Kambhampati et al. https://arxiv.org/pdf/2504.09762
Meiklejohn, S., Blauzvern, H., Maruseac, M., Schrock, S., Simon, L., & Shumailov, I. (2025). Machine learning models have a supply chain problem. https://arxiv.org/abs/2505.22778
Gao, Y., Shumailov, I., & Fawaz, K. (2025). Supply-chain attacks in machine learning frameworks. https://openreview.net/pdf?id=EH5PZW6aCr
Apache Log4j Vulnerability Guidance. https://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance
Bober-Irizar, M., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N. (2022). Architectural backdoors in neural networks. https://arxiv.org/pdf/2206.07840
Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches. David Glukhov, Ilia Shumailov, et al. https://proceedings.mlr.press/v235/glukhov24a.html
AlphaEvolve MLST interview [Matej Balog, Alexander Novikov]. https://www.youtube.com/watch?v=vC9nAosXrJw
A new #Nerdland monthly review! This month: Dinosaur sounds! Lieven in the USA! Ig Nobel Prizes! Fake termites! Muscle cheating! A website on a vape! And much more... Show notes: https://podcast.nerdland.be/nerdland-maandoverzicht-oktober-2025/ Presented by Lieven Scheire with Peter Berx, Jeroen Baert, Els Aerts, Bart van Peer and Kurt Beheydt. Recording, editing and mastering by Jens Paeyeneers and Els Aerts.

(00:00:00) Intro
(00:01:42) Lieven, Hetty and Els visited Ötzi
(00:03:28) Contents of a 30,000-year-old "toolkit" backpack examined
(00:04:47) Has life been found on Mars?
(00:09:02) Dwarf planet Ceres was once habitable
(00:10:50) Man drags a robot around on a chain (Any2track demo)
(00:15:02) New Unitree robot dog A2 Stellar Explorer has astonishingly good balance and can carry a human
(00:17:09) "What is a species?" One ant species gives birth to another...
(00:26:12) Imitating dinosaur sounds with 3D prints
(00:35:19) **Inca death whistle**
(00:36:52) How is 3I/ATLAS doing?
(00:45:13) New AI hack: hidden prompts in photos
(00:52:59) Einstein Telescope: Belgium reinforces its ambitions
(00:57:44) DeepMind develops AI to help LIGO with gravitational-wave detection
(01:03:13) Also a podcast about ET: "ET voor de vrienden", with Bert Verknocke
(01:03:50) SILICON VALLEY NEWS
(01:04:04) Lieven was in Silicon Valley
(01:16:39) Family reports that a Waymo taxi is loitering aimlessly near their house
(01:18:43) Meta launches smart glasses and the demo gremlin throws everything into disarray
(01:27:51) Mark Zuckerberg sues Mark Zuckerberg for being kicked off Facebook (which is very Meta indeed)
(01:30:39) First tests with Hardt Hyperloop in Rotterdam, 700 km/h
(01:34:11) Ig Nobel Prizes
(01:42:00) Extreme mimicry: beetle carries a fake termite on its back
(01:45:54) Gamer builds an aim assist that acts directly on his muscles
(01:51:38) "Bogdan The Geek" hosts a website on a disposable vape
(01:54:16) How do you resuscitate someone in space?
(02:00:29) Maple moths use a disco gene to regulate their day/night rhythm
(02:05:45) New Stanford study once again shows the health risks of the clock change
(02:08:59) AI news
(02:09:18) Geoffrey Hinton's girlfriend breaks up with him via ChatGPT
(02:10:01) ASML invests 1.3 billion euros in Mistral
(02:12:15) Absurd stunt in Shanghai: robot enrolled as a PhD student
(02:13:42) Absurd stunt in Albania: AI appointed as a minister
(02:16:29) RECALLS
(02:17:15) Fun science pub quiz from New Scientist
(02:17:57) Emilie De Clerck has become allergic to meat after a tick bite in Belgium! She can still eat catarrhine primates, such as baboons or humans
(02:19:26) It's not Peter Treurlings but Peter Teurlings of Tech United
(02:19:47) Technopolis opens two evenings for adults only: October 17 and March 6. Night@Technopolis
(02:23:41) SELF-PROMOTION
(02:29:25) SPONSOR TUC RAIL
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same.
Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow
It's time to prepare for AI personhood
Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being"
Link to the podslop podcasts
California Governor Signs Sweeping A.I. Law
Sen. Mark Kelly's big plan for an AI future isn't ambitious enough
DeepMind defines levels of AGI
Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies
AI passed the hardest CFA test in minutes
ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents
Amazon event live blog: we're here for new Echos, Kindles, and more
Introducing ChatGPT Pulse | OpenAI
Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant
That Secret Service SIM farm story is bogus
Judge Gives Preliminary Approval to Anthropic Settlement
It's official: Google says the Android and ChromeOS merger is coming 'next year'
Blippo+
Guardian sliders
Ive's $4,800 lantern
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Raiza Martin
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: monarchmoney.com with code IM, Melissa.com/twit, threatlocker.com/twit, agntcy.org
Can AI help find the next breakthrough material? One of my side quests on this long world trip has been chasing the answer to this question. AI-accelerated materials discovery fascinates me because the right material at the right time can revolutionize the economy and society. Yet the process of materials discovery has changed little since the 20th century: sweaty, repetitive trial and error, or dumb serendipity. In a prior video, I mentioned Periodic Labs, which raised $200 million from A16Z at a billion-dollar valuation. But they are not alone in the space. In the United States, we have Orbital Materials and Radical AI, as well as Dunia Innovations and RARA Factory in Europe. There may be two or so more I haven't heard of, not to mention the work DeepMind and Google are doing here as well. Are all of these companies chasing ghosts? Over the past few months, I have read widely and spoken to people in the field. In today's video: some scattered thoughts on AI-accelerated materials discovery. Is it real?
Jason Howell and Jeff Jarvis break down OpenAI's Sora 2 update, DeepMind's vision for video foundation models, California's sweeping new AI law, and Spotify's fight against 75 million spammy tracks. Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
0:00:00 - Podcast begins
0:01:51 - Jason's Irish tour with Meta Oakley HSNT glasses
0:08:33 - Sora 2 is here
0:12:04 - iJustine's Sora test and promotion
0:14:32 - OpenAI's New Sora Video Generator to Require Copyright Holders to Opt Out
0:19:34 - Foom: all slop, all the time...
0:29:11 - DeepMind says video models like Veo 3 could become general purpose foundation models for vision, like LLMs for text
0:34:24 - AI Actress Tilly Norwood Condemned by SAG-AFTRA: Tilly 'Is Not an Actor… It Has No Life Experience to Draw From, No Emotion'
0:40:59 - CEO of Controversial Startup Vows to Keep Mass Publishing AI Podcasts Despite Backlash
0:53:07 - Spotify Announces New AI Safeguards, Says It's Removed 75 Million 'Spammy' Tracks
0:55:00 - California Governor Signs Sweeping A.I. Law
0:56:23 - Hawley and Blumenthal unveil AI evaluation bill
1:00:43 - This is Gemini for Home and the redesigned Home app, rollout starts today
1:04:09 - Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant
1:06:44 - Introducing Claude Sonnet 4.5
1:08:03 - DoorDash Unveils Delivery Robot, Smart Scale in Hardware Debut
1:10:02 - Opera launches Neon AI browser to join agentic web browsing race
Learn more about your ad choices. Visit megaphone.fm/adchoices
Microsoft AI CEO Mustafa Suleyman (co-founder of DeepMind) joins The Neuron to discuss his provocative essay on "Seemingly Conscious AI" and why machines that mimic consciousness pose unprecedented risks, even when they're not actually alive. We explore how 700 million people are already using AI as life coaches, Microsoft's massive $208B revenue strategy for AI, and exclusive features like Copilot Vision that can see everything you see in real time.
Key topics:
• Why AI consciousness is an illusion, and why that's dangerous
• Microsoft's 2 gigawatt datacenter expansion (2.5x Seattle's power usage)
• MAI-1 Preview breaking into the top 10 models globally
• The future of AI browsers and autonomous agents
• Why granting AI rights could threaten humanity
Subscribe to The Neuron newsletter (580,000+ readers): https://theneuron.ai
Resources mentioned:
• Mustafa's essay "Seemingly Conscious AI Is Coming" https://mustafa-suleyman.ai/seemingly...
• Try Copilot Vision: https://copilot.microsoft.com
• Microsoft Edge AI features: https://www.microsoft.com/en-us/edge
• MAI-1 Preview models: https://microsoft.ai/news/two-new-in-...
Special thanks to today's sponsor, Wispr Flow: https://wisprflow.ai/neuron
Our 221st episode with a summary and discussion of last week's big AI news! Recorded on 09/19/2025 Hosted by Andrey Kurenkov and co-hosted by Michelle Lee Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read our text newsletter and comment on the podcast at https://lastweekin.ai/ In this episode: OpenAI releases a new version of Codex integrated with GPT-5, enhancing coding capabilities and aiming to compete with other AI coding tools like Claude Code. Significant updates in the robotics sector include new ventures in humanoid robots from companies like Figure AI and China's Unitree, as well as expansions in robotaxi services from Tesla and Amazon's Zoox. New open-source models and research advancements were discussed, including Google DeepMind's self-improving foundation model for robotics and a physics foundation model aimed at generalizing across various physical systems. Legal battles continue to surface in the AI landscape with Warner Bros. suing MidJourney for copyright violations and Rolling Stone suing Google over AI-generated content summaries, highlighting challenges in AI governance and ethics. Timestamps: (00:00:10) Intro / Banter Tools & Apps (00:02:33) OpenAI upgrades Codex with a new version of GPT-5 (00:04:02) Google Injects Gemini Into Chrome as AI Browsers Go Mainstream | WIRED (00:06:14) Anthropic's Claude can now make you a spreadsheet or slide deck. 
| The Verge (00:07:12) Luma AI's New Ray3 Video Generator Can 'Think' Before Creating - CNET Applications & Business (00:08:32) OpenAI secures Microsoft's blessing to transition its for-profit arm | TechCrunch (00:10:31) Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic | TechCrunch (00:12:00) Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report (00:13:52) China's Unitree plans $7 billion IPO valuation as humanoid robot race heats up (00:15:45) Tesla's robotaxi plans for Nevada move forward with testing permit | TechCrunch (00:17:48) Amazon's Zoox jumps into U.S. robotaxi race with Las Vegas launch (00:19:27) Replit hits $3B valuation on $150M annualized revenue | TechCrunch (00:21:14) Perplexity reportedly raised $200M at $20B valuation | TechCrunch Projects & Open Source (00:22:08) [2509.07604] K2-Think: A Parameter-Efficient Reasoning System (00:24:31) [2509.09614] LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering Research & Advancements (00:28:17) [2509.15155] Self-Improving Embodied Foundation Models (00:31:47) [2509.13805] Towards a Physics Foundation Model (00:34:26) [2509.12129] Embodied Navigation Foundation Model Policy & Safety (00:37:49) Anthropic endorses California's AI safety bill, SB 53 | TechCrunch (00:40:12) Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle (00:42:02) Rolling Stone Publisher Sues Google Over AI Overview Summaries
Membership | Donations | Spotify | YouTube | Apple Podcasts
This week we hear from Larry Muhlstein, who worked on Responsible AI at Google and DeepMind before leaving to found the Holistic Technology Project. In Larry's words:
"Care is crafted from understanding, respect, and will. Once care is deep enough and in a generative reciprocal relationship, it gives rise to self-expanding love. My work focuses on creating such systems of care by constructing a holistic sociotechnical tree with roots of philosophical orientation, a trunk of theoretical structure, and technological leaves and fruit that offer nourishment and support to all parts of our world. I believe that we can grow love through technologies of togetherness that help us to understand, respect, and care for each other. I am committed to supporting the responsible development of such technologies so that we can move through these trying times towards a world where we are all well together."
In this episode, Larry and I explore the "roots of philosophical orientation" and "trunk of theoretical structure" as he lays them out in his Technological Love knowledge garden, asking how technologies for reality, perspectives, and karma can help us grow a world in love. What is just enough abstraction? When is autonomy desirable and when is it a false god? What do property and selfhood look like in a future where the ground truths of our interbeing shape design and governance?
It's a long, deep conversation on fundamentals we need to reckon with if we are to live in futures we actually want. I hope you enjoy it as much as we did.
Our next dialogue is with Sam Arbesman, resident researcher at Lux Capital and author of The Magic of Code. 
We'll interrogate the distinctions between software and spellcraft, explore the unique blessings and challenges of a world defined by advanced computing, and probe the good, bad, and ugly of futures that move at the speed of thought…
✨ Show Links
• Hire me for speaking or consulting
• Explore the interactive knowledge garden grown from over 250 episodes
• Explore the Humans On The Loop dialogue and essay archives
• Browse the books we discuss on the show at Bookshop.org
• Dig into nine years of mind-expanding podcasts
✨ Additional Resources
"Growing A World In Love" — Larry Muhlstein at Hurry Up, We're Dreaming
"The Future Is Both True & False" — Michael Garfield on Medium
"Sacred Data" — Michael Garfield at Hurry Up, We're Dreaming
"The Right To Destroy" — Lior Strahilevitz at Chicago Unbound
"Decentralized Society: Finding Web3's Soul" — Puja Ohlhaver, E. Glen Weyl, and Vitalik Buterin at SSRN
✨ Mentions
Karl Schroeder's "Degrees of Freedom"
Joshua DiCaglio's Scale Theory
Geoffrey West's Scale
Hannah Arendt
Ken Wilber
Doug Rushkoff's Survival of the Richest
Manda Scott's Any Human Power
Torey Hayden
Chaim Gingold's Building SimCity
James P. Carse's Finite & Infinite Games
John C. Wright's The Golden Oecumene
Eckhart Tolle's The Power of Now
✨ Related Episodes
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
This week, we talk with Gabe Pereyra, President and co-founder at Harvey, about his path from DeepMind and Google Brain to launching Harvey with Winston Weinberg; how a roommate's real-world legal workflows met early GPT-4 access and OpenAI backing; why legal emerged as the right domain for large models; and how personal ties to the profession plus a desire to tackle big societal problems shaped a mission to apply advanced AI where language and law intersect.
Gabe's core thesis lands hard: "the models are the product." Rather than narrow tools for single tasks, Harvey opted for a broad assistant approach. Lawyers live in text and email, so dialog becomes the control surface, an "AI associate" supporting partners and teams. Early demos showed useful output across many tasks, which reinforced a generalist design, then productized connections into Outlook and Word, plus a no-code Workflow Builder.
Go-to-market strategy flipped the usual script. Instead of starting small, Harvey partnered early with Allen & Overy and leaders like David Wakeling. Large firms supplied layered review, which reduced risk from model errors and increased learning velocity. From there the build list grew: security and data privacy, dedicated capacity, links to firm systems, case law, DMS, data rooms, and eDiscovery. A matter workspace sits at the center. Adoption rises with surface area, with daily activity approaching seventy percent where four or more product surfaces see regular use. ROI work now includes analysis of write-offs and specialized workflows co-built with firms and clients, for example Orrick, A&O, and PwC.
Talent, training, and experience value come next. Firms worry about job paths, and Gabe does not duck that concern. Models handle complex work, which raises anxiety, yet also shortens learning curves. Harvey collaborates on curricula using past deals, plus partnerships with law schools. 
Return on experience shows up in recruiting, PwC reports stronger appeal among early-career talent, and quality-of-life gains matter. On litigation use cases, chronology builders require firm expertise and guardrails, with evaluation methods that mirror how senior associates review junior output. Frequent use builds a mental model for where errors tend to appear.
Partnerships round out the strategy: research content from LexisNexis and Wolters Kluwer, work product in iManage and NetDocuments, CLM workflows via Ironclad, with plans for data rooms, eDiscovery, and billing. The vision extends to a complete matter management service, emails, documents, prior work, evaluation, billing links, and strict ethical walls, all organized by client-matter. Global requirements drive multi-region storage and controls, including Australia's residency rules. The forward look centers on differentiation through customization: firms encode expertise into models, workflows, and agents, then deliver outcomes faster and at software margins. "The value sits in your people," Gabe says, and firms that convert know-how into systems will lead the pack.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript
The AI Breakdown: Daily Artificial Intelligence News and Discussions
AI just scored a historic win in the International Collegiate Programming Contest, with OpenAI's GPT-5 and Google DeepMind outperforming nearly every human team. The discussion focuses on whether this marks a real inflection point for AI, shifting from competition success to the frontier of scientific discovery. Key themes include public perception, the pace of progress, and what these results signal for the future of the field.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com – Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils – Cloud-native AI solutions that power results https://robotsandpencils.com/
Vanta – Simplify compliance – https://vanta.com/nlw
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? nlw@aidailybrief.ai
Trevor (who is also Microsoft's "Chief Questions Officer") and Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google DeepMind, do a deep dive into whether the benefits of AI to the human race outweigh its unprecedented risks. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Would you let AI control your lights, music, and bedtime stories?
Do you think tools like DeepMind Genie 3 could reshape gaming and education?
Would you trust an AI to plan your weekend or book your restaurant reservations?
Which of Google's new AI features excites you the most?
What's your verdict on Pixel 10—smartest phone ever, or just more hype?
Would you trust AI-generated financial analysis with your investments?
Hey there, tech enthusiasts!
At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? "It's mostly luck," he says, but "another part is what I think of as maximising my luck surface area."
Video, full transcript, and links to learn more: https://80k.info/nn2
This means creating as many opportunities as possible for surprisingly good things to happen:
Write publicly.
Reach out to researchers whose work you admire.
Say yes to unusual projects that seem a little scary.
Nanda's own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.
His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. "People were into it," he shrugs.
Most remarkably, he ended up running DeepMind's mechanistic interpretability team. He'd joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. "I did not know if I was going to be good at this. I think it's gone reasonably well."
His core lesson: "You can just do things." This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.
In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel's conversation!)
What did you think of the episode? 
https://forms.gle/6binZivKmjjiHU6dA
Chapters:
Cold open (00:00:00)
Who's Neel Nanda? (00:01:12)
Luck surface area and making the right opportunities (00:01:46)
Writing cold emails that aren't insta-deleted (00:03:50)
How Neel uses LLMs to get much more done (00:09:08)
"If your safety work doesn't advance capabilities, it's probably bad safety work" (00:23:22)
Why Neel refuses to share his p(doom) (00:27:22)
How Neel went from the couch to an alignment rocketship (00:31:24)
Navigating towards impact at a frontier AI company (00:39:24)
How does impact differ inside and outside frontier companies? (00:49:56)
Is a special skill set needed to guide large companies? (00:56:06)
The benefit of risk frameworks: early preparation (01:00:05)
Should people work at the safest or most reckless company? (01:05:21)
Advice for getting hired by a frontier AI company (01:08:40)
What makes for a good ML researcher? (01:12:57)
Three stages of the research process (01:19:40)
How do supervisors actually add value? (01:31:53)
An AI PhD – with these timelines?! (01:34:11)
Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
Remember: You can just do things (01:43:51)
This episode was recorded on July 21.
Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore
Ryan Julian is a research scientist in embodied AI. He worked on large-scale robotics foundation models at DeepMind and got his PhD in machine learning at USC in 2021. In our conversation today, we discuss…
What makes a robot a robot, and what makes robotics so difficult,
The promise of robotic foundation models and strategies to overcome the data bottleneck,
Why full labor replacement is far less likely than human-robot synergy,
China's top players in the robotics industry, and what sets them apart from American companies and research institutions,
How robots will impact manufacturing, and how quickly we can expect to see robotics take off.
O*NET's ontology of labor: http://onetcenter.org/database.html
ChinaTalk's Unitree coverage: https://www.chinatalk.media/p/unitree-ceo-on-chinas-robot-revolution
Robotics reading recommendations: Chris Paxton, Ted Xiao, C Zhang, and The Humanoid Hub on X. You can also check out the General Robots and Learning and Control Substacks, Vincent Vanhoucke on Medium, and IEEE's robotics coverage.
Today's podcast is brought to you by 80,000 Hours, a nonprofit that helps people find fulfilling careers that do good. 80,000 Hours — named for the average length of a career — has been doing in-depth research on AI issues for over a decade, producing reports on how the US and China can manage existential risk, exploring scenarios for potential AI catastrophe, and examining the concrete steps you can take to help ensure AI development goes well. Their research suggests that working to reduce risks from advanced AI could be one of the most impactful ways to make a positive difference in the world. 
They provide free resources to help you contribute, including:
Detailed career reviews for paths like AI safety technical research, AI governance, information security, and AI hardware,
A job board with hundreds of high-impact opportunities,
A podcast featuring deep conversations with experts like Carl Shulman, Ajeya Cotra, and Tom Davidson,
Free, one-on-one career advising to help you find your best fit.
To learn more and access their research-backed career guides, visit 80000hours.org/ChinaTalk. To read their report about AI coordination between the US and China, visit http://80000hours.org/chinatalkcoord.
Outro music: Daft Punk - Motherboard (YouTube Link)
Learn more about your ad choices. Visit megaphone.fm/adchoices
How much trouble is Apple in when it comes to AI? It's so bad that they're enlisting the help of their chief rival: Google. What does that mean for Google, and will the world FINALLY have an AI-powered Siri after years of broken promises? Tune in and find out.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Apple and Google Partnership for AI
Apple's Ongoing AI Strategy Failures
Bloomberg Report: Gemini AI Integration
Siri AI Overhaul With Google Gemini
Technical Details: Gemini on Apple Servers
World Knowledge Answers Feature Launch
Apple's AI Talent Exodus to Competitors
Legal Risks and AI Feature Lawsuits
Impact on Big Tech Competitive Landscape
Potential Timeline for Smarter Siri Release
Timestamps:
00:00 "Everyday AI: Daily Insights"
04:35 Apple's Rivalry and AI Struggles
09:03 Smart Assistants' Evolution and Apple's Challenge
10:15 Apple's AI-Powered Answer Engine
15:54 Apple's Private Cloud Security Architecture
17:53 Apple Expands Siri with Google AI
21:23 Apple's AI Ambitions and Challenges
26:06 Apple's AI Talent Exodus
30:49 Apple AI Team Exodus
32:48 Apple's Reliance on Google Dominance
35:04 "Siri's 2026 Update and Industry Impact"
38:44 Support and Stay Updated
Keywords: Apple, Google, Apple and Google partnership, Apple Intelligence, generative AI, Google Gemini, AI relevance, Siri, Siri failures, large language models, chief rival collaboration, Big Tech AI, market cap, AI-powered web search, AI search engine, Bloomberg report, AI features, AI partnership, AI summarizer, Apple AI delays, technological rivalry, OpenAI, Anthropic, Perplexity, AI foundation models, custom AI model, Private Cloud Compute, privacy architecture, AI talent exodus, machine learning, Apple lawsuits, false advertising, AI market competition, AI integration, hardware vs. software, ChatGPT alternative, Spotlight search, Safari AI integration, AI-driven device functionality, Meta, DeepMind, Microsoft AI, AI-powered summaries, web summarization, device intelligence, AI-powered assistants, smart assistant shortcomings
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner