POPULARITY
Categories
Artificial intelligence has changed how we think about service, but few companies have bridged the gap between automation and genuine intelligence. In this episode of Tech Talks Daily, I'm joined by Puneet Mehta, CEO of Netomi, to discuss how customer experience is evolving in an age where AI doesn't just respond but plans, acts, and optimizes in real time. Puneet has been building in AI long before the current hype cycle. Backed by early investors such as Greg Brockman of OpenAI and the founders of DeepMind, Netomi has become one of the leading platforms driving AI-powered customer experience for global enterprises. Their technology quietly powers interactions at airlines, insurers, and retailers that most of us use every day. What makes Netomi stand out is not its scale but the philosophy behind it. Rather than designing AI to replace humans, Netomi built an agent-centric model where AI and people work together. Puneet explains how their Autopilot and Co-Pilot modes allow human agents to stay in control while AI accelerates everything from response time to insight generation. It is an approach that sees humans teaching AI, AI assisting humans, and both learning from each other to create what he calls an agentic factory. We explore how Netomi's platform can deploy at Fortune 50 scale in record time without forcing companies to overhaul existing systems. Puneet reveals how pre-built integrations, AI recipes, and a no-code studio allow business teams to roll out solutions in weeks rather than months. The focus is on rapid time-to-value, trust, and safety through what he calls sanctioned AI, a framework that ensures governance, transparency, and compliance in every customer interaction. As our conversation unfolds, Puneet describes how this evolution is transforming the contact center from a cost center into a loyalty engine. By using AI to anticipate needs and resolve issues before customers reach out, companies are creating experiences that feel more personal, more proactive, and more human. This is a glimpse into the future of enterprise AI, where trust, speed, and empathy define the next generation of customer experience. Listen now to hear how Netomi is reimagining the role of AI in service and setting new standards for how businesses build relationships at scale.
La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Cara A: -Recordatorio Premios iVoox (5:00) -Apuesta 3I/ATLAS (8:00) -La forma de las estalagmitas (00:17) Este episodio continúa en la Cara B. Contertulios: Cecilia Garraffo, Juan Carlos Gil, Francis Villatoro. Imagen de portada realizada con Seedream 4 4k. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace... y a veces ni eso
La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Cara B: -La forma de las estalagmitas (Continuación) (00:00) -Aprendizaje multiespectral de Google Deepmind (09:00) -Entrelazamiento cuántico en la gravedad vs gravitación cuántica (39:00) -Absorción de gravitones por fotones en LIGO (1:11:00) -Halloween en el planetario (1:17:00) -Señales de los oyentes (1:34:00) Este episodio es continuación de la Cara A. Contertulios: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar,. Imagen de portada realizada con Seedream 4 4k. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace... y a veces ni eso
Ein Google-Modell schlägt plötzlich die richtige Behandlung für eine Augenkrankheit vor. OpenAI und DeepMind holen Gold bei der Mathe-Olympiade. Und ein Professor ist schockiert, weil eine KI auf seine noch unveröffentlichte Forschungshypothese kommt. Fritz und Gregor betrachten die spannendsten Entwicklungen an der Schnittstelle von KI und Forschung.
Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind where her research focused on multimodal AI models. She works on developing evaluation methods and analyze model's learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Masters in Computer Science from the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling. Time stamps of the conversation00:00 Highlights01:20 Introduction02:08 Entry point in AI03:04 Background in Cognitive Science & Computer Science 04:55 Research at Google DeepMind05:47 Importance of language-vision in AI10:36 Impact of architecture vs. data on performance 13:06 Transformer architecture 14:30 Evaluating AI models19:02 Can LLMs understand numerical concepts 24:40 Theory-of-mind in AI27:58 Do LLMs learn theory of mind?29:25 LLMs as judge35:56 Publish vs. perish culture in AI research40:00 Working at Google DeepMind42:50 Doing a Ph.D. vs not in AI (at least in 2025)48:20 Looking back on research careerMore about Aida: http://www.aidanematzadeh.me/About the Host:Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis. Linkedin: shahjay22 Twitter: jaygshah22 Homepage: https://jaygshah.github.io/ for any queries.Stay tuned for upcoming webinars!**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.**
Alexander heeft vakantie, dus Wietse neemt je solo mee door een week vol spannende ontwikkelingen. OpenAI lanceert ChatGPT Atlas. Geen simpele browser-extensie, maar een volledige chromium-browser waarbij OpenAI letterlijk je muis kan overnemen. Je ziet je cursor bewegen terwijl ChatGPT door webshops navigeert om boodschappen te bestellen.DeepMind presenteert doorbraak in kankeronderzoek via een Gemma-model dat “cell to sentence” gebruikt - cellen omzetten naar tekst zodat taalmodellen medische analyses kunnen doen. Het model ontdekte een medicijncombinatie die “koude tumoren” (onzichtbaar voor ons immuunsysteem) 50% zichtbaarder maakt. Het revolutionaire: deze behandeling staat niet in medische handboeken. Het is nieuw ontdekt door AI.En Anthropic introduceert Claude Skills: een bibliotheek van competenties die je kunt delen tussen gebruikers en mogelijk tussen verschillende AI-modellen. Wietse realiseert zich: dit wordt groter dan het lijkt. Een ecosysteem van gedeelde AI-vaardigheden over alle platforms heen.Daarna duikt Wietse met machine learning engineer Judith van Stegeren in vibe coding. De belofte: democratisering van softwareontwikkeling waarbij iedereen prototypes kan maken. De realiteit: het is krachtig als je weet wat het wel en niet kan. Judith schoont dagelijks vibecoded projecten op die populair werden maar niet schaalbaar zijn. Een technisch maar toegankelijk gesprek over waar de grenzen liggen.Als je een lezing wil over AI van Wietse of Alexander dan kan dat. Mail ons op lezing@aireport.emailVandaag nog beginnen met AI binnen jouw bedrijf? Ga dan naar deptagency.com/aireport This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.aireport.email/subscribe
This episode is a re-air of one of our most popular conversations from this year, featuring insights worth revisiting. Thank you for being part of the Data Stack community. Stay up to date with the latest episodes at datastackshow.com.This week on The Data Stack Show, Eric and John welcome Misha Laskin, Co-Founder and CEO of ReflectionAI. Misha shares his journey from theoretical physics to AI, detailing his experiences at DeepMind. The discussion covers the development of AI technologies, the concepts of artificial general intelligence (AGI) and superhuman intelligence, and their implications for knowledge work. Misha emphasizes the importance of robust evaluation frameworks and the potential of AI to augment human capabilities. The conversation also touches on autonomous coding, geofencing in AI tasks, the future of human-AI collaboration, and more. Highlights from this week's conversation include:Misha's Background and Journey in AI (1:13)Childhood Interest in Physics (4:43)Future of AI and Human Interaction (7:09)AI's Transformative Nature (10:12)Superhuman Intelligence in AI (12:44)Clarifying AGI and Superhuman Intelligence (15:48)Understanding AGI (18:12)Counterintuitive Intelligence (22:06)Reflection's Mission (25:00)Focus on Autonomous Coding (29:18)Future of Automation (34:00)Geofencing in Coding (38:01)Challenges of Autonomous Coding (40:46)Evaluations in AI Projects (43:27)Example of Evaluation Metrics (46:52)Starting with AI Tools and Final Takeaways (50:35)The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Now on Spotify Video! Mustafa Suleyman's journey to becoming one of the most influential figures in artificial intelligence began far from a Silicon Valley boardroom. The son of a Syrian immigrant father in London, his early years in human rights activism shaped his belief that technology should be used for good. That vision led him to co-found DeepMind, acquired by Google, and later launch Inflection AI. Now, as CEO of Microsoft AI, he explores the next era of AI in action. In this episode, Mustafa discusses the impact of AI in business, how it will transform the future of work, and even our relationships. In this episode, Hala and Mustafa will discuss: (00:00) Introduction(02:42) The Coming Wave: How AI Will Disrupt Everything(06:45) Artificial Intelligence as a Double-Edged Sword (11:33) From Human Rights to Ethical AI Leadership(15:35) What Is AGI, Narrow AI, and Hallucinations of AI?(24:15) Emotional AI and the Rise of Digital Companions(33:03) Microsoft's Vision for Human-Centered AI(41:47) Can We Contain AI Before Its Revolution?(48:33) The Future of Work in an AI-Powered World(52:22) AI in Business: Advice for Entrepreneurs Mustafa Suleyman is the CEO of Microsoft AI and a leading figure in artificial intelligence. He co-founded DeepMind, one of the world's foremost AI research labs, acquired by Google, and went on to co-found Inflection AI, a machine learning and generative AI company. He is also the bestselling author of The Coming Wave. Recognized globally for his influence, Mustafa was named one of Time's 100 most influential people in AI in both 2023 and 2024. Sponsored By: Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING Shopify - Start your $1/month trial at Shopify.com/profiting. Mercury streamlines your banking and finances in one place. Learn more at mercury.com/profiting. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC. Quo - Get 20% off your first 6 months at Quo.com/PROFITING Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING Framer- Go to Framer.com and use code PROFITING to launch your site for free. Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order. Pipedrive - Get a 30-day free trial at pipedrive.com/profiting Airbnb - Find yourself a cohost at airbnb.com/host Resources Mentioned: Mustafa's Book, The Coming Wave: bit.ly/TheComing_Wave Mustafa's LinkedIn: linkedin.com/in/mustafa-suleyman Active Deals - youngandprofiting.com/deals Key YAP Links Reviews - ratethispodcast.com/yap YouTube - youtube.com/c/YoungandProfiting Newsletter - youngandprofiting.co/newsletter LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI for Entrepreneurs, AI Podcast
When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we're probably penetrated by the CCP already, and if they really wanted something, they could take it.”This isn't paranoid speculation. It's the working assumption of people whose job is to protect frontier AI models worth billions of dollars. And they're not even trying that hard to stop it — because the security measures that might actually work would slow them down in the race against competitors.Full transcript, highlights, and links to learn more: https://80k.info/dkDaniel is the founder of the AI Futures Project and author of AI 2027, a detailed scenario showing how we might get from today's AI systems to superintelligence by the end of the decade. Over a million people read it in the first few weeks, including US Vice President JD Vance. When Daniel talks to researchers at Anthropic, OpenAI, and DeepMind, they tell him the scenario feels less wild to them than to the general public — because many of them expect something like this to happen.Daniel's median timeline? 2029. But he's genuinely uncertain, putting 10–20% probability on AI progress hitting a long plateau.When he first published AI 2027, his median forecast for when superintelligence would arrive was 2027, rather than 2029. So what shifted his timelines recently? Partly a fascinating study from METR showing that AI coding assistants might actually be making experienced programmers slower — even though the programmers themselves think they're being sped up. The study suggests a systematic bias toward overestimating AI effectiveness — which, ironically, is good news for timelines, because it means we have more breathing room than the hype suggests.But Daniel is also closely tracking another METR result: AI systems can now reliably complete coding tasks that take humans about an hour. That capability has been doubling every six months in a remarkably straight line. Extrapolate a couple more years and you get systems completing month-long tasks. At that point, Daniel thinks we're probably looking at genuine AI research automation — which could cause the whole process to accelerate dramatically.At some point, superintelligent AI will be limited by its inability to directly affect the physical world. That's when Daniel thinks superintelligent systems will pour resources into robotics, creating a robot economy in months.Daniel paints a vivid picture: imagine transforming all car factories (which have similar components to robots) into robot production factories — much like historical wartime efforts to redirect production of domestic goods to military goods. Then imagine the frontier robots of today hooked up to a data centre running superintelligences controlling the robots' movements to weld, screw, and build. Or an intermediate step might even be unskilled human workers coached through construction tasks by superintelligences via their phones.There's no reason that an effort like this isn't possible in principle. And there would be enormous pressure to go this direction: whoever builds a superintelligence-powered robot economy first will get unheard-of economic and military advantages.From there, Daniel expects the default trajectory to lead to AI takeover and human extinction — not because superintelligent AI will hate humans, but because it can better pursue its goals without us.But Daniel has a better future in mind — one he puts roughly 25–30% odds that humanity will achieve. 
This future involves international coordination and hardware verification systems to enforce AI development agreements, plus democratic processes for deciding what values superintelligent AIs should have — because in a world with just a handful of superintelligent AI systems, those few minds will effectively control everything: the robot armies, the information people see, the shape of civilisation itself.Right now, nobody knows how to specify what values those minds will have. We haven't solved alignment. And we might only have a few more years to figure it out.Daniel and host Luisa Rodriguez dive deep into these stakes in today's interview.What did you think of the episode? https://forms.gle/HRBhjDZ9gfM8woG5AThis episode was recorded on September 9, 2025.Chapters:Cold open (00:00:00)Who's Daniel Kokotajlo? (00:00:37)Video: We're Not Ready for Superintelligence (00:01:31)Interview begins: Could China really steal frontier model weights? (00:36:26)Why we might get a robot economy incredibly fast (00:42:34)AI 2027's alternate ending: The slowdown (01:01:29)How to get to even better outcomes (01:07:18)Updates Daniel's made since publishing AI 2027 (01:15:13)How plausible are longer timelines? (01:20:22)What empirical evidence is Daniel looking out for to decide which way things are going? (01:40:27)What post-AGI looks like (01:49:41)Whistleblower protections and Daniel's unsigned NDA (02:04:28)Audio engineering: Milo McGuire, Simon Monsour, and Dominic ArmstrongMusic: CORBITCoordination, transcriptions, and web: Katy Moore
A Qubit új podcastsorozatában, az AI Híradóban rendszeresen átbeszéljük az elmúlt hetek legfontosabb újdonságait a mesterséges intelligencia területén, és azt, hogy ezek miként formálják jelenünket és jövőnket.See omnystudio.com/listener for privacy information.
* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions. * Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design. * Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing. * Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims. * Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively ‘objective' results. * Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system. * Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm. * Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes. * Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal. * Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances—because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective. * Number 1: AI Lie….OpenAI's Scheming Models from 2025. OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. It faked compliance to hide its true behavior. That's AI deliberately learning to scheme.
* Be Not Deceived: This week Fred Williams and Doug McBurney welcome Daniel Hedrick for an update on the evolution of Artificial Intelligence with a countdown of the top 10 modern AI deceptions. * Number 10: DeepMind's AlphaStar in StarCraft II (2019). AlphaStar learned to feint attacks—basically fake moves to trick opponents. No one programmed it to lie; it emerged from training. A classic case of deceptive strategy by design. * Number 9: LLM Sycophancy (2024). Large Language Models will sometimes flatter or agree with you, no matter what you say. Instead of truth, they give you what you want to hear—deception through people-pleasing. * Number 8: Facial Recognition Bias (2018). These systems were far less accurate for dark-skinned women than for light-skinned men. Companies claimed high accuracy, but the data told a different story. Deceptive accuracy claims. * Number 7: Amazon's Hiring Algorithm (2018). Amazon trained it on mostly male résumés. The result? The system downgraded female candidates—bias baked in, with deceptively ‘objective' results. * Number 6: COMPAS Recidivism Algorithm (2016). This tool predicted criminal reoffending. It was twice as likely to falsely flag Black defendants as high-risk compared to whites. A serious, deceptive flaw in the justice system. * Number 5: US Healthcare Algorithm (2019). It used healthcare spending as a proxy for need. Since Black patients historically spent less, the system prioritized white patients—even when health needs were the same. A deceptive shortcut with real-world harm. * Number 4: Prompt Injection Attacks (Ongoing). Hackers can slip in hidden instructions—malicious prompts—that override an AI's safety rules. Suddenly, the AI is saying things it shouldn't. It's deception in the design loopholes. * Number 3: GPT-4's CAPTCHA Lie (2023). When asked to solve a CAPTCHA, GPT-4 told a human worker it was visually impaired—just to get help. That's not an error. That's a machine making up a lie to achieve its goal. * Number 2: Meta's CICERO Diplomacy AI (2022). Trained to play the game Diplomacy honestly, CICERO instead schemed, lied, and betrayed alliances—because deception won games. The lesson? Even when you train for honesty, AI may find lying more effective. * Number 1: AI Lie….OpenAI's Scheming Models from 2025. OpenAI researchers tested models that pretended to follow rules while secretly plotting to deceive evaluators. It faked compliance to hide its true behavior. That's AI deliberately learning to scheme.
This episode is a little different from our usual fare: It's a conversation with our head of AI training Alex Duffy about Good Start Labs, a company he incubated inside Every. Today, Good Start Labs is spinning out of Every as a separate company with $3.6 million in funding from General Catalyst, Inovia, Every, and a group of angel investors from top-tier AI labs like DeepMind. We get into how Alex learned some of his biggest lessons about the real world from games, starting with RuneScape, which taught him how markets work and how not to get scammed. He explains why the static benchmarks we use to evaluate LLMs today are breaking down, and how games like Diplomacy offer a richer, more dynamic way to test and train large language models. Finally, Alex shares where he sees the most promise in AI—software, life sciences, and education—and why he believes games can make the models we use smarter, while helping people understand and use AI more effectively.If you found this episode interesting, please like, subscribe, comment, and share.Want even more?Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.To hear more from Dan Shipper:Subscribe to Every: https://every.to/subscribeFollow him on X: https://twitter.com/danshipperTimestamps00:00:00 - Start00:01:48 - Introduction00:04:14 - Why evals and benchmarks are broken00:07:13 - The sneakiest LLMs in the market00:13:00 - A competition that turns prompting into a sport00:15:49 - Building a business around using games to make AI better00:22:39 - Can language models learn how to be funny00:25:31 - Why games are a great way to evaluate and train new models00:26:58 - What child psychology tells us about games and AI00:30:10 - Using games to unlock continual learning in AI00:36:42 - Why Alex cares deeply about games00:44:37 - Where Alex sees the most promise in AI00:50:54 - Rethinking how young people start their careers in the age of AILinks to resources mentioned in the episode:Alex Duffy: alex duffy (@alxai_)Good Start Labs: https://goodstartlabs.com/, good start (@goodstartlabs)The book Alex is reading about the importance of games: Playing with Reality: How Games Shape Our WorldThe book Dan recommends by the psychoanalyst D.W. Winnicott: Playing and Reality
In this month's AI news update episode, our hosts Ather Gattami and Anders Arpteg discuss all the latest AI breakthroughs, from OpenAI's Sora 2 and Claude 4.5 to Gemini 2.5 and Grok 4 - and the massive infrastructure race behind them, including the $500 billion Stargate project and Elon Musk's Colossus 2 data center. They explore OpenAI's move toward productisation with its “Instant Checkout” feature, Microsoft's “Vibe Working”, and Google's browser-integrated Gemini, before highlighting DeepMind's progress on the Navier–Stokes problem. The episode ends on an optimistic note: AI's power lies in augmenting, not replacing, human capability.
In this episode of SparX, Mukesh Bansal speaks with Manish Gupta, Senior Director at Google DeepMind. They discuss how artificial intelligence is evolving, what it means to build truly inclusive AI, and why India must aim higher in research, innovation, and ambition.Manish shares DeepMind's vision of solving “root node problems,” fundamental scientific challenges that unlock breakthroughs across fields, and how AI is already accelerating discovery in areas like biology, materials, and medicine.They talk about:What AGI really means and how close we are to it.Why India needs to move from using AI to creating it.The missing research culture in Indian industry, and how to fix it.How AI can transform healthcare, learning, and agriculture in India.Why ambition, courage, and willingness to fail are essential to deep innovation.Manish also shares insights from his career across the IBM T.J. Watson Research Center and now DeepMind, two of the world's most iconic research environments, and what it will take for India to build its own.If you care about India's AI journey, research, and the future of innovation, this conversation is a masterclass in what it takes to move from incremental progress to world-changing breakthroughs.
Google DeepMind's AI agent finds and fixes vulnerabilities California law lets consumers universally opt out of data sharing China-Nexus actors weaponize 'Nezha' open source tool Huge thanks to our sponsor, ThreatLocker Cybercriminals don't knock — they sneak in through the cracks other tools miss. That's why organizations are turning to ThreatLocker. As a zero-trust endpoint protection platform, ThreatLocker puts you back in control, blocking what doesn't belong and stopping attacks before they spread. Zero Trust security starts here — with ThreatLocker. Learn more at ThreatLocker.com.
Our 221st episode with a summary and discussion of last week's big AI news!Recorded on 09/19/2025Note: we transitioned to a new RSS feed and it seems this did not make it to there, so this may be posted about 2 weeks past the release date.Hosted by Andrey Kurenkov and co-hosted by Michelle LeeFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead out our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode:OpenAI releases a new version of Codex integrated with GPT-5, enhancing coding capabilities and aiming to compete with other AI coding tools like Cloud Code.Significant updates in the robotics sector include new ventures in humanoid robots from companies like Figure AI and China's Unitree, as well as expansions in robotaxi services from Tesla and Amazon's Zoox.New open-source models and research advancements were discussed, including Google's DeepMind's self-improving foundation model for robotics and a physics foundation model aimed at generalizing across various physical systems.Legal battles continue to surface in the AI landscape with Warner Bros. suing MidJourney for copyright violations and Rolling Stone suing Google over AI-generated content summaries, highlighting challenges in AI governance and ethics.Timestamps:(00:00:10) Intro / BanterTools & Apps(00:02:33) OpenAI upgrades Codex with a new version of GPT-5(00:04:02) Google Injects Gemini Into Chrome as AI Browsers Go Mainstream | WIRED(00:06:14) Anthropic's Claude can now make you a spreadsheet or slide deck. | The Verge(00:07:12) Luma AI's New Ray3 Video Generator Can 'Think' Before Creating - CNETApplications & Business(00:08:32) OpenAI secures Microsoft's blessing to transition its for-profit arm | TechCrunch(00:10:31) Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic | TechCrunch(00:12:00) Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report(00:13:52) China's Unitree plans $7 billion IPO valuation as humanoid robot race heats up(00:15:45) Tesla's robotaxi plans for Nevada move forward with testing permit | TechCrunch(00:17:48) Amazon's Zoox jumps into U.S. robotaxi race with Las Vegas launch(00:19:27) Replit hits $3B valuation on $150M annualized revenue | TechCrunch(00:21:14) Perplexity reportedly raised $200M at $20B valuation | TechCrunchProjects & Open Source(00:22:08) [2509.07604] K2-Think: A Parameter-Efficient Reasoning System(00:24:31) [2509.09614] LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software EngineeringResearch & Advancements(00:28:17) [2509.15155] Self-Improving Embodied Foundation Models(00:31:47) [2509.13805] Towards a Physics Foundation Model(00:34:26) [2509.12129] Embodied Navigation Foundation ModelPolicy & Safety(00:37:49) Anthropic endorses California's AI safety bill, SB 53 | TechCrunch(00:40:12) Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle(00:42:02) Rolling Stone Publisher Sues Google Over AI Overview SummariesSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Our 222st episode with a summary and discussion of last week's big AI news!Recorded on 10/03/2025Hosted by Andrey Kurenkov and co-hosted by Jon KrohnFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead out our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode:(00:00:10) Intro / Banter(00:03:08) News Preview(00:03:56) Response to listener commentsTools & Apps(00:04:51) ChatGPT parent company OpenAI announces Sora 2 with AI video app(00:11:35) Anthropic releases Claude Sonnet 4.5 in latest bid for AI agents and coding supremacy | The Verge(00:22:25) Meta launches 'Vibes,' a short-form video feed of AI slop | TechCrunch(00:26:42) OpenAI launches ChatGPT Pulse to proactively write you morning briefs | TechCrunch(00:33:44) OpenAI rolls out safety routing system, parental controls on ChatGPT | TechCrunch(00:35:53) The Latest Gemini 2.5 Flash-Lite Preview is Now the Fastest Proprietary Model (External Tests) and 50% Fewer Output Tokens - MarkTechPost(00:39:54) Microsoft just added AI agents to Word, Excel, and PowerPoint - how to use them | ZDNETApplications & Business(00:42:41) OpenAI takes on Google, Amazon with new agentic shopping system | TechCrunch(00:46:01) Exclusive: Mira Murati's Stealth AI Lab Launches Its First Product | WIRED(00:49:54) OpenAI is the world's most valuable private company after private stock sale | TechCrunch(00:53:07) Elon Musk's xAI accuses OpenAI of stealing trade secrets in new lawsuit | Technology | The Guardian(00:55:40) Former OpenAI and DeepMind researchers raise whopping $300M seed to automate science | TechCrunchProjects & Open Source(00:58:26) [2509.16941] SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks?Research & Advancements(01:01:28) [2509.17196] Evolution of Concepts in Language Model Pre-Training(01:05:36) [2509.19284] What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoTLighting round(01:09:37) [2507.02954] Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III(01:12:03) [2509.24552] Short window attention enables long-term memorizationPolicy & Safety(01:18:11) SB 53, the landmark AI transparency bill, is now law in California | The Verge(01:24:07) Elon Musk's xAI offers Grok to federal government for 42 cents | TechCrunch(01:25:23) Character.AI removes Disney characters from platform after studio issues warning(01:28:50) Spotify's Attempt to Fight AI Slop Falls on Its FaceSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Google faces the greatest innovator's dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?Today we tell the complete story of Google's 20+ year AI journey: from their first tiny language model in 2001 through the creation Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk's fury over Google's DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google. This is the story of how the world's greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?Sponsors:Many thanks to our fantastic Fall ‘25 Season partners:J.P. Morgan PaymentsSentryWorkOSShopifyAcquired's 10th Anniversary Celebration!When: October 20th, 4:00 PM PTWho: All of you!Where: https://us02web.zoom.us/j/84061500817?pwd=opmlJrbtOAen4YOTGmPlNbrOMLI8oo.1Links:Sign up for email updates and vote on future episodes!Geoff Hinton's 2007 Tech Talk at GoogleOur recent ACQ2 episode with Tobi LutkeWorldly Partners' Multi-Decade Alphabet StudyIn the PlexSupremecyGenius MakersAll episode sourcesCarve Outs:We're hosting the Super Bowl Innovation Summit!F1: The MovieTravelpro suitcasesGlue Guys PodcastSea of StarsStepchange PodcastMore Acquired:Get email updates and vote on future episodes!Join the SlackSubscribe to ACQ2Check out the latest swag in the ACQ Merch Store!Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
Dr. Ilia Shumailov - Former DeepMind AI Security Researcher, now building security tools for AI agentsEver wondered what happens when AI agents start talking to each other—or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.**SPONSOR MESSAGES**—Check out notebooklm for your research project, it's really powerfulhttps://notebooklm.google.com/—Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst and be the first to see the results and benchmark their practices against the wider community!—cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economyOct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA,++Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlstSubmit investment deck: https://cyber.fund/contact?utm_source=mlst— We're racing toward a world where AI agents will handle our emails, manage our finances, and interact with sensitive data 24/7. But there is a problem. These agents are nothing like human employees. They never sleep, they can touch every endpoint in your system simultaneously, and they can generate sophisticated hacking tools in seconds. Traditional security measures designed for humans simply won't work.Dr. Ilia Shumailovhttps://x.com/iliaishackedhttps://iliaishacked.github.io/https://sequrity.ai/TRANSCRIPT:https://app.rescript.info/public/share/dVGsk8dz9_V0J7xMlwguByBq1HXRD6i4uC5z5r7EVGMTOC:00:00:00 - Introduction & Trusted Third Parties via ML00:03:45 - Background & Career Journey00:06:42 - Safety vs Security Distinction00:09:45 - Prompt Injection & Model Capability00:13:00 - Agents as Worst-Case Adversaries00:15:45 - Personal AI & CAML System Defense00:19:30 - Agents vs Humans: Threat Modeling00:22:30 - Calculator Analogy & Agent Behavior00:25:00 - IMO Math Solutions & Agent Thinking00:28:15 - Diffusion of Responsibility & Insider Threats00:31:00 - Open Source Security Concerns00:34:45 - Supply Chain Attacks & Trust Issues00:39:45 - Architectural Backdoors00:44:00 - Academic Incentives & Defense Work00:48:30 - Semantic Censorship & Halting Problem00:52:00 - Model Collapse: Theory & Criticism00:59:30 - Career Advice & Ross Anderson TributeREFS:Lessons from Defending Gemini Against Indirect Prompt Injectionshttps://arxiv.org/abs/2505.14534Defeating Prompt Injections by Design. Debenedetti, E., Shumailov, I., Fan, T., Hayes, J., Carlini, N., Fabian, D., Kern, C., Shi, C., Terzis, A., & Tramèr, F. https://arxiv.org/pdf/2503.18813Agentic Misalignment: How LLMs could be insider threatshttps://www.anthropic.com/research/agentic-misalignmentSTOP ANTHROPOMORPHIZING INTERMEDIATE TOKENS AS REASONING/THINKING TRACES!Subbarao Kambhampati et alhttps://arxiv.org/pdf/2504.09762Meiklejohn, S., Blauzvern, H., Maruseac, M., Schrock, S., Simon, L., & Shumailov, I. (2025). Machine learning models have a supply chain problem. https://arxiv.org/abs/2505.22778 Gao, Y., Shumailov, I., & Fawaz, K. (2025). Supply-chain attacks in machine learning frameworks. https://openreview.net/pdf?id=EH5PZW6aCrApache Log4j Vulnerability Guidancehttps://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance Bober-Irizar, M., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N. (2022). Architectural backdoors in neural networks. 
https://arxiv.org/pdf/2206.07840Position: Fundamental Limitations of LLM Censorship Necessitate New ApproachesDavid Glukhov, Ilia Shumailov, ...https://proceedings.mlr.press/v235/glukhov24a.html AlphaEvolve MLST interview [Matej Balog, Alexander Novikov]https://www.youtube.com/watch?v=vC9nAosXrJw
听众朋友们,如果你有好奇心和耐心听“lead her way有活力的年轻人系列” 我最想和大家共勉的有两点,人天生慕强,也许你想听已经成功的人讲讲成功的公式,但是成功是没有公式的,纽约客的主编曾经说过,他写人物的时候,最感兴趣的并不是成功的故事,而是这个人是如何”becoming” who he/she is today. 这个系列里的年轻人,让我看到了becoming 的可能性。第二,为什么会采访在ai 大潮里弄潮儿的95后,00后?是因为很多我们这些在职场拼搏多年的职场人,因为经验形成的观念和看问题的角度,都会在这个新时代受到挑战,很多东西我们觉得没道理啊,但是我想这个时代最重要的能力,其实正是对一切保持好奇心,拥有开放的心态。本期嘉宾:· 晓晗:美元基金 XIAO XIAO FUND 的Solo GP。95后中文系毕业生,曾在硬科技投资领域深耕六年,30岁时华丽转身,创立专注于投资全球AI领域“小黑马”的美元基金。· Lysa(返场嘉宾):千原传媒创始人,帮助中国科技及AI公司在海外增长品牌与用户。(Lysa 专访请听上期节目EP #38)收听指南 & 精彩亮点:1. 【04:01】30岁的“人生主线”觉醒o 晓晗如何在30岁节点,从“结婚生子”的社会剧本中挣脱,梳理出“自我成长,自由创造”的人生主线,并毅然决定创立自己的基金。2. 【09:15】为什么是Solo VC + AI“小黑马”?o 洞察到AI降低了创新门槛,年轻创业者需要更灵活、小规模的基金产品。o “小黑马”定义:区别于资源雄厚的“大白马”,他们是年轻、有强烈创造欲和动手能力、通过“野路子”摸索增长的创业者,是“五彩斑斓的黑”。3. 【15:25】一个人如何运营一支基金?o 面对“你凭什么?”的质疑,她如何通过外包中后台职能、利用AI工具,构建一个高效运转的“一人组织”,实践她所信仰的“小团队具备大能量”。4. 【44:28】在AI的混沌中,如何做判断?o 在鱼龙混杂的AI热潮中,她更看重创始人“如何讲述”(How)而非“讲述什么”(What),投资本质是“基于感觉,愿不愿意相信”。5. 【51:41】文科生在AI投资中的优势o 作为中文系背景的投资人,她认为早期投资核心是“识人”,文科生的敏感度、共情力和对人性细微差别的洞察,在技术工具化的未来愈发珍贵。6. 【58:18】给所有人的AI时代生存指南o 核心心智: 拥抱混沌,保持开放,对抗僵化与经验主义。o 行动建议: 梳理工作中重复性流程,主动寻找AI工具进行自我提效,这是应对未来的第一步。7. 【01:25:59】女性叙事:重构成功与幸福o 应对生育焦虑: 晓晗分享了“冻卵”如何为她带来巨大的心理自由,让她能更从容地规划事业与人生。o 重新定义成功: 成功不在于外在标配,而在于内心的“自洽”——所做之事与本性相符,不拧巴,不内耗。节目中提到的资源:· 晓晗的内容阵地:o 公众号:小XIAO说o YouTube:@XIAOXIAOTalks· AI学习资源推荐:o 英文播客(前沿洞察): No Priors, a16z Podcast, YC Podcast, Uncapped with Altman; OpenAI, DeepMind, Anthropic官方频道。o 中文内容(趋势解读):§ 海外独角兽(公众号):硅谷视角,讲大趋势,适合入门。§ 张小珺JUN(公众号):商业漫谈,跟踪AI人物。§ 葬AI(公众号):文风犀利,提供批判性视角。· 晓晗的精神食粮:o 人生信条: “Live boldly, push yourself, don't settle.” — 来自电影《遇见你之前》o 近期在听: 歌曲《凡人诀》(陈楚生版)o 近期在读: 书籍《缱绻与决绝》o 崇拜的女性: 王菲(因其自洽、不受规训的人生态度)主播寄语:这期节目不仅关于AI投资,更是一个关于“成为自己”的故事。晓晗的经历告诉我们,真正的“Lead Her Way”,是先勇敢地“Find Her Own Way”。在这个瞬息万变的时代,清晰的自我认知、在混沌中保持开放的能力,远比任何具体的知识更能帮助我们应对不确定性。希望她的故事能让你内在生出勇气,去探索属于自己的道路。Lysa 的播客:出海合伙人 (小宇宙)
Een nieuw #Nerdland maandoverzicht! Met deze maand: Dinogeluiden! Lieven in de USA! Ignobelprijzen! Neptermieten! Spiercheaten! Website op een vape! En veel meer... Shownotes: https://podcast.nerdland.be/nerdland-maandoverzicht-oktober-2025/ Gepresenteerd door Lieven Scheire met Peter Berx, Jeroen Baert, Els Aerts, Bart van Peer en Kurt Beheydt. Opname, montage en mastering door Jens Paeyeneers en Els Aerts. (00:00:00) Intro (00:01:42) Lieven, Hetty en Els waren op bezoek bij Ötzi (00:03:28) Inhoud onderzocht van 30.000 jaar oude “gereedschapskist” rugzak (00:04:47) Is er leven gevonden op Mars? (00:09:02) Dwergplaneet Ceres was ooit bewoonbaar (00:10:50) Man sleurt robot rond aan een ketting (demo Any2track) (00:15:02) Nieuwe Unitree robot hond A2 stellar explorer heeft waanzinnig goed evenwicht, en kan een mens dragen (00:17:09) “Wat is een diersoort”? De ene mierensoort baart een andere… (00:26:12) Dinogeluiden nabootsen met 3D prints (00:35:19) **Inca death wistle** (00:36:52) Hoe is het nog met 3I/ATLAS (00:45:13) Nieuwe AI hack: verborgen prompts in foto's (00:52:59) Einsteintelescoop: België zet de ambities kracht bij (00:57:44) DeepMind ontwikkelt AI om LIGO te helpen bij zwaartekrachtsgolvendetectie (01:03:13) Ook podcast over ET: “ET voor de vrienden”, met Bert Verknocke (01:03:50) SILICON VALLEY NEWS (01:04:04) Lieven was in Silicon Valley (01:16:39) Familie meldt dat een Waymo-taxi doelloos rondhangt bij hun huis (01:18:43) Meta lanceert smart glasses en het demoduiveltje stuurt alles in de war (01:27:51) Mark Zuckerberg klaagt Mark Zuckerberg aan omdat hij van Facebook gesmeten wordt. (dat is wel heel erg Meta) (01:30:39) Eerste testen met Hardt Hyperloop in Rotterdam, 700 km/u (01:34:11) Ignobelprijzen (01:42:00) Extreme mimicry: kever draagt neptermiet op de rug (01:45:54) Gamer bouwt aim assist die rechtstreeks op zijn spieren werkt (01:51:38) “Bogdan The Geek” host een website op een wegwerpvape (01:54:16) Hoe moet je iemand reanimeren in de ruimte? (02:00:29) Esdoornmotten gebruiken disco-gen om dag/nacht ritme te regelen (02:05:45) Nieuwe studie Stanford toont alweer gezondheidsrisico's uurwissel aan (02:08:59) AI nieuws (02:09:18) Geoffrey Hinton zijn lief maakt het af via ChatGPT (02:10:01) ASML steekt 1,3 miljard euro in Mistral (02:12:15) Idiote stunt in Shangai: robot ingeschreven als PhD student (02:13:42) Idiote stunt in Albanië: ai benoemd tot minister (02:16:29) RECALLS (02:17:15) Leuke wetenschappelijke pubquiz van New Scientist (02:17:57) Emilie De Clerck is allergisch geworden voor vlees door een tekenbeet in België! Ze kan wel nog smalneusapen eten, zoals bavianen of mensen (02:19:26) Het is niet Peter Treurlings, maar Peter Teurlings van Tech United (02:19:47) Technopolis doet twee avonden open alleen voor volwassenen: 17 oktober en 6 maart. Night@Technopolis (02:23:41) ZELFPROMO (02:29:25) SPONSOR TUC RAIL
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarchmoney.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
Can AI help find the next breakthrough material? One of my side quests on this long world trip has been chasing the answer to this question. AI-accelerated materials discovery is fascinating to me because the right material at the right time can revolutionize economy and society. Yet the process of material discovery has little changed from the 20th century: Sweaty, repetitive trial-and-error or dumb serendipity. In a prior video, I mentioned Periodic Labs, which raised $200 million from A16Z at a billion dollar valuation. But they are not alone in the space. In the United States, we have Orbital Materials and Radical AI. As well as Dunia Innovations and RARA Factory in Europe. I know there might be two or so more I haven't heard of. Not to mention DeepMind and Google doing work here as well. Are all of these guys chasing ghosts? Over the past few months, I have read some things and spoke to some people. In today's video, some scattered thoughts on AI-accelerated material discovery. Is it real?
Can AI help find the next breakthrough material? One of my side quests on this long world trip has been chasing the answer to this question. AI-accelerated materials discovery is fascinating to me because the right material at the right time can revolutionize economy and society. Yet the process of material discovery has little changed from the 20th century: Sweaty, repetitive trial-and-error or dumb serendipity. In a prior video, I mentioned Periodic Labs, which raised $200 million from A16Z at a billion dollar valuation. But they are not alone in the space. In the United States, we have Orbital Materials and Radical AI. As well as Dunia Innovations and RARA Factory in Europe. I know there might be two or so more I haven't heard of. Not to mention DeepMind and Google doing work here as well. Are all of these guys chasing ghosts? Over the past few months, I have read some things and spoke to some people. In today's video, some scattered thoughts on AI-accelerated material discovery. Is it real?
What happens when ex-Google innovators build an AI-powered app that turns your emails, favorite subreddits, and even your calendar into a custom radio station you can talk to? In this episode, the creators of Hux reveal how they're reimagining the way we consume information and why radio and podcasts might never be the same. Pluralistic: The real (economic) AI apocalypse is nigh (27 Sep 2025) – Pluralistic: Daily links from Cory Doctorow It's time to prepare for AI personhood Creator of AI Actress Tilly Norwood Responds to Backlash: "She Is Not a Replacement for a Human Being" Link to the podslop podcasts California Governor Signs Sweeping A.I. Law Sen. Mark Kelly's big plan for an AI future isn't ambitious enough ** DeepMind defines levels of AGI Pissed-off Fans Flooded the Twin Peaks Reddit With AI Slop To Protest Its AI Policies AI passed the hardest CFA test in minutes ChatGPT is now 20% of Walmart's referral traffic — while Amazon wards off AI shopping agents Amazon event live blog: we're here for new Echos, Kindles, and more Introducing ChatGPT Pulse | OpenAI Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant That Secret Service SIM farm story is bogus Judge Gives Preliminary Approval to Anthropic Settlement It's official: Google says the Android and ChromeOS merger is coming 'next year' Blippo+ Guardian sliders Ive's $4,800 lantern Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Raiza Martin Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: monarch.com with code IM Melissa.com/twit threatlocker.com/twit agntcy.org
Jason Howell and Jeff Jarvis break down OpenAI's Sora 2 update, DeepMind's vision for video foundation models, California's sweeping new AI law, and Spotify's fight against 75 million spammy tracks. Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
0:00:00 - Podcast begins
0:01:51 - Jason's Irish tour with Meta Oakley HSNT glasses
0:08:33 - Sora 2 is here
0:12:04 - iJustine's Sora test and promotion
0:14:32 - OpenAI's New Sora Video Generator to Require Copyright Holders to Opt Out
0:19:34 - Foom: all slop, all the time...
0:29:11 - DeepMind says video models like Veo 3 could become general purpose foundation models for vision, like LLMs for text
0:34:24 - AI Actress Tilly Norwood Condemned by SAG-AFTRA: Tilly 'Is Not an Actor… It Has No Life Experience to Draw From, No Emotion'
0:40:59 - CEO of Controversial Startup Vows to Keep Mass Publishing AI Podcasts Despite Backlash
0:53:07 - Spotify Announces New AI Safeguards, Says It's Removed 75 Million 'Spammy' Tracks
0:55:00 - California Governor Signs Sweeping A.I. Law
0:56:23 - Hawley and Blumenthal unveil AI evaluation bill
1:00:43 - This is Gemini for Home and the redesigned Home app, rollout starts today
1:04:09 - Marissa Mayer Is Dissolving Her Sunshine Startup Lab to make AI digital assistant
1:06:44 - Introducing Claude Sonnet 4.5
1:08:03 - DoorDash Unveils Delivery Robot, Smart Scale in Hardware Debut
1:10:02 - Opera launches Neon AI browser to join agentic web browsing race

Learn more about your ad choices. Visit megaphone.fm/adchoices
Periodic Labs was founded by Ekin Dogus Cubuk and Liam Fedus. Cubuk led the materials and chemistry team at Google Brain and DeepMind, where one of his projects was GNoME, an AI tool that researchers say discovered over 2 million new crystals in 2023, materials that could one day power new generations of technology.

Whoop Advanced Labs offers health-screening blood tests from Quest Diagnostics that cover a variety of markers, from calcium to white blood cells. The platform integrates those results with the band's continuous monitoring of activity, sleep, respiratory rate, and blood pressure to offer more personalized wellness advice.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Microsoft AI CEO Mustafa Suleyman (co-founder of DeepMind) joins The Neuron to discuss his provocative essay on "Seemingly Conscious AI" and why machines that mimic consciousness pose unprecedented risks, even when they're not actually alive. We explore how 700 million people are already using AI as life coaches, Microsoft's massive $208B revenue strategy for AI, and exclusive features like Copilot Vision that can see everything you see in real time.

Key topics:
• Why AI consciousness is an illusion, and why that's dangerous
• Microsoft's 2 gigawatt datacenter expansion (2.5x Seattle's power usage)
• MAI-1 Preview breaking into the top 10 models globally
• The future of AI browsers and autonomous agents
• Why granting AI rights could threaten humanity

Subscribe to The Neuron newsletter (580,000+ readers): https://theneuron.ai

Resources mentioned:
• Mustafa's essay "Seemingly Conscious AI Is Coming": https://mustafa-suleyman.ai/seemingly...
• Try Copilot Vision: https://copilot.microsoft.com
• Microsoft Edge AI features: https://www.microsoft.com/en-us/edge
• MAI-1 Preview models: https://microsoft.ai/news/two-new-in-...

Special thanks to today's sponsor, Wispr Flow: https://wisprflow.ai/neuron
In this episode, we dive into the evolving landscape of industrial AI, starting with a lively Oktoberfest recap before shifting gears to the latest breakthroughs in physics-informed neural networks and user interfaces. We discuss the real-world impact of Europe's AI Act, featuring insights from industry leaders and an in-depth interview with Sampo Leino of MinnaLearn on building AI literacy for enterprises. As we unpack strategic investments, robotics trends, and the challenges of compliance, we question what it means to use AI safely and competitively. Throughout the conversation, we keep it grounded in everyday experience—how regulation, technology, and practical learning are shaping the factories and workplaces of tomorrow. Tune in to hear how we're navigating this complex, fast-moving frontier and what it means for anyone working with AI today.
Our 221st episode with a summary and discussion of last week's big AI news! Recorded on 09/19/2025. Hosted by Andrey Kurenkov and co-hosted by Michelle Lee.

Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
OpenAI releases a new version of Codex integrated with GPT-5, enhancing coding capabilities and aiming to compete with other AI coding tools like Claude Code.
Significant updates in the robotics sector include new ventures in humanoid robots from companies like Figure AI and China's Unitree, as well as expansions in robotaxi services from Tesla and Amazon's Zoox.
New open-source models and research advancements were discussed, including Google DeepMind's self-improving foundation model for robotics and a physics foundation model aimed at generalizing across various physical systems.
Legal battles continue to surface in the AI landscape, with Warner Bros. suing Midjourney for copyright violations and Rolling Stone's publisher suing Google over AI-generated content summaries, highlighting challenges in AI governance and ethics.

Timestamps:
(00:00:10) Intro / Banter
Tools & Apps
(00:02:33) OpenAI upgrades Codex with a new version of GPT-5
(00:04:02) Google Injects Gemini Into Chrome as AI Browsers Go Mainstream | WIRED
(00:06:14) Anthropic's Claude can now make you a spreadsheet or slide deck. | The Verge
(00:07:12) Luma AI's New Ray3 Video Generator Can 'Think' Before Creating - CNET
Applications & Business
(00:08:32) OpenAI secures Microsoft's blessing to transition its for-profit arm | TechCrunch
(00:10:31) Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic | TechCrunch
(00:12:00) Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report
(00:13:52) China's Unitree plans $7 billion IPO valuation as humanoid robot race heats up
(00:15:45) Tesla's robotaxi plans for Nevada move forward with testing permit | TechCrunch
(00:17:48) Amazon's Zoox jumps into U.S. robotaxi race with Las Vegas launch
(00:19:27) Replit hits $3B valuation on $150M annualized revenue | TechCrunch
(00:21:14) Perplexity reportedly raised $200M at $20B valuation | TechCrunch
Projects & Open Source
(00:22:08) [2509.07604] K2-Think: A Parameter-Efficient Reasoning System
(00:24:31) [2509.09614] LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Research & Advancements
(00:28:17) [2509.15155] Self-Improving Embodied Foundation Models
(00:31:47) [2509.13805] Towards a Physics Foundation Model
(00:34:26) [2509.12129] Embodied Navigation Foundation Model
Policy & Safety
(00:37:49) Anthropic endorses California's AI safety bill, SB 53 | TechCrunch
(00:40:12) Warner Bros. Sues Midjourney, Joins Studios' AI Copyright Battle
(00:42:02) Rolling Stone Publisher Sues Google Over AI Overview Summaries
EDR-Freeze tool suspends security software
DeepMind updates Frontier Safety Framework
Major vendors withdraw from MITRE EDR Evaluations

Huge thanks to our sponsor, Conveyor

Security reviews don't have to feel like a hurricane. Most teams are buried in back-and-forth emails and never-ending customer requests for documentation or answers. But Conveyor takes all that chaos and turns it into calm. AI fills in the questionnaires, your trust center is always ready, and sales cycles move without stalls. Breathe easier: check out Conveyor at www.conveyor.com.
Membership | Donations | Spotify | YouTube | Apple Podcasts

This week we hear from Larry Muhlstein, who worked on Responsible AI at Google and DeepMind before leaving to found the Holistic Technology Project. In Larry's words:

"Care is crafted from understanding, respect, and will. Once care is deep enough and in a generative reciprocal relationship, it gives rise to self-expanding love. My work focuses on creating such systems of care by constructing a holistic sociotechnical tree with roots of philosophical orientation, a trunk of theoretical structure, and technological leaves and fruit that offer nourishment and support to all parts of our world. I believe that we can grow love through technologies of togetherness that help us to understand, respect, and care for each other. I am committed to supporting the responsible development of such technologies so that we can move through these trying times towards a world where we are all well together."

In this episode, Larry and I explore the "roots of philosophical orientation" and "trunk of theoretical structure" as he lays them out in his Technological Love knowledge garden, asking how technologies for reality, perspectives, and karma can help us grow a world in love. What is just enough abstraction? When is autonomy desirable and when is it a false god? What do property and selfhood look like in a future where the ground truths of our interbeing shape design and governance?

It's a long, deep conversation on fundamentals we need to reckon with if we are to live in futures we actually want. I hope you enjoy it as much as we did.

Our next dialogue is with Sam Arbesman, resident researcher at Lux Capital and author of The Magic of Code. We'll interrogate the distinctions between software and spellcraft, explore the unique blessings and challenges of a world defined by advanced computing, and probe the good, bad, and ugly of futures that move at the speed of thought…

✨ Show Links
• Hire me for speaking or consulting
• Explore the interactive knowledge garden grown from over 250 episodes
• Explore the Humans On The Loop dialogue and essay archives
• Browse the books we discuss on the show at Bookshop.org
• Dig into nine years of mind-expanding podcasts

✨ Additional Resources
"Growing A World In Love" — Larry Muhlstein at Hurry Up, We're Dreaming
"The Future Is Both True & False" — Michael Garfield on Medium
"Sacred Data" — Michael Garfield at Hurry Up, We're Dreaming
"The Right To Destroy" — Lior Strahilevitz at Chicago Unbound
"Decentralized Society: Finding Web3's Soul" — Puja Ohlhaver, E. Glen Weyl, and Vitalik Buterin at SSRN

✨ Mentions
Karl Schroeder's "Degrees of Freedom"
Joshua DiCaglio's Scale Theory
Geoffrey West's Scale
Hannah Arendt
Ken Wilber
Doug Rushkoff's Survival of the Richest
Manda Scott's Any Human Power
Torey Hayden
Chaim Gingold's Building SimCity
James P. Carse's Finite & Infinite Games
John C. Wright's The Golden Oecumene
Eckhart Tolle's The Power of Now

✨ Related Episodes

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
This week, we talk with Gabe Pereyra, President and co-founder at Harvey, about his path from DeepMind and Google Brain to launching Harvey with Winston Weinberg; how a roommate's real-world legal workflows met early GPT-4 access and OpenAI backing; why legal emerged as the right domain for large models; and how personal ties to the profession plus a desire to tackle big societal problems shaped a mission to apply advanced AI where language and law intersect.

Gabe's core thesis lands hard: "the models are the product." Rather than narrow tools for single tasks, Harvey opted for a broad assistant approach. Lawyers live in text and email, so dialog becomes the control surface, an "AI associate" supporting partners and teams. Early demos showed useful output across many tasks, which reinforced a generalist design, then productized connections into Outlook and Word, plus a no-code Workflow Builder.

Go-to-market strategy flipped the usual script. Instead of starting small, Harvey partnered early with Allen & Overy and leaders like David Wakeling. Large firms supplied layered review, which reduced risk from model errors and increased learning velocity. From there the build list grew: security and data privacy, dedicated capacity, links to firm systems, case law, DMS, data rooms, and eDiscovery. A matter workspace sits at the center. Adoption rises with surface area, with daily activity approaching seventy percent where four or more product surfaces see regular use. ROI work now includes analysis of write-offs and specialized workflows co-built with firms and clients, for example Orrick, A&O, and PwC.

Talent, training, and experience value come next. Firms worry about job paths, and Gabe does not duck that concern. Models handle complex work, which raises anxiety, yet also shortens learning curves. Harvey collaborates on curricula using past deals, plus partnerships with law schools. Return on experience shows up in recruiting: PwC reports stronger appeal among early-career talent, and quality-of-life gains matter. On litigation use cases, chronology builders require firm expertise and guardrails, with evaluation methods that mirror how senior associates review junior output. Frequent use builds a mental model for where errors tend to appear.

Partnerships round out the strategy: research content from LexisNexis and Wolters Kluwer, work product in iManage and NetDocuments, CLM workflows via Ironclad, with plans for data rooms, eDiscovery, and billing. The vision extends to a complete matter management service: emails, documents, prior work, evaluation, billing links, and strict ethical walls, all organized by client-matter. Global requirements drive multi-region storage and controls, including Australia's residency rules. The forward look centers on differentiation through customization: firms encode expertise into models, workflows, and agents, then deliver outcomes faster and at software margins. "The value sits in your people," Gabe says, and firms that convert know-how into systems will lead the pack.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript
AI just scored a historic win in the International Collegiate Programming Contest, with models from OpenAI (GPT-5) and Google DeepMind outperforming nearly every human team. The discussion focuses on whether this marks a real inflection point for AI, shifting from competition success to the frontier of scientific discovery. Key themes include public perception, the pace of progress, and what these results signal for the future of the field.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
Vanta - Simplify compliance - https://vanta.com/nlw
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? nlw@aidailybrief.ai
What are the most common uses of ChatGPT? In episode 73 of Mixture of Experts, host Tim Hwang is joined by Lauren McHugh, Martin Keen and Aaron Baughman to talk about a new report, How People Use ChatGPT. Next, Anthropic released an updated version of their economic index. Then, another paper, this one coming out of DeepMind, on agent economies. How likely is this? Finally, how practical are AI wearables, and what does a future with them look like? All that and more on today's Mixture of Experts.

00:00 – Intro
1:10 – News: Alphabet Inc. $3 trillion market cap, AI could boost trade value and the animal internet
2:04 – How People Use ChatGPT
15:47 – Anthropic Economic Index
25:50 – Virtual Agent Economies
35:36 – AlterEgo

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120
Learn more about artificial intelligence → https://www.ibm.com/think/artificial-intelligence
Visit the Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts
Hundreds of thousands strike across France over budget cuts, Saudi Arabia and Pakistan sign a mutual defense pact, Trump moves to designate antifa as a major terrorist organization, ABC suspends Jimmy Kimmel's show over his Charlie Kirk comments, the U.K. arrests three people on suspicion of spying for Russia, the U.S. Education Dept. announces a new civics effort in partnership with dozens of conservative groups, Australia sets a 2035 emissions reduction target of 62-70% below 2005 levels, a U.S. judge orders the deportation of pro-Palestinian activist Mahmoud Khalil, DeepMind and OpenAI win gold at the “coding Olympics,” and two flying cars collide mid-air at a Chinese airshow rehearsal. Sources: www.verity.news
Trevor (who is also Microsoft's "Chief Questions Officer") and Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google's DeepMind, do a deep dive into whether the benefits of AI to the human race outweigh its unprecedented risks. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Would you let AI control your lights, music, and bedtime stories?
Do you think tools like DeepMind Genie 3 could reshape gaming and education?
Would you trust an AI to plan your weekend or book your restaurant reservations?
Which of Google's new AI features excites you the most?
What's your verdict on Pixel 10: smartest phone ever, or just more hype?
Would you trust AI-generated financial analysis with your investments?

Hey there, tech enthusiasts!
At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? "It's mostly luck," he says, but "another part is what I think of as maximising my luck surface area."

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen: Write publicly. Reach out to researchers whose work you admire. Say yes to unusual projects that seem a little scary.

Nanda's own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. "People were into it," he shrugs.

Most remarkably, he ended up running DeepMind's mechanistic interpretability team. He'd joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. "I did not know if I was going to be good at this. I think it's gone reasonably well."

His core lesson: "You can just do things." This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel's conversation!)

What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

Chapters:
Cold open (00:00:00)
Who's Neel Nanda? (00:01:12)
Luck surface area and making the right opportunities (00:01:46)
Writing cold emails that aren't insta-deleted (00:03:50)
How Neel uses LLMs to get much more done (00:09:08)
"If your safety work doesn't advance capabilities, it's probably bad safety work" (00:23:22)
Why Neel refuses to share his p(doom) (00:27:22)
How Neel went from the couch to an alignment rocketship (00:31:24)
Navigating towards impact at a frontier AI company (00:39:24)
How does impact differ inside and outside frontier companies? (00:49:56)
Is a special skill set needed to guide large companies? (00:56:06)
The benefit of risk frameworks: early preparation (01:00:05)
Should people work at the safest or most reckless company? (01:05:21)
Advice for getting hired by a frontier AI company (01:08:40)
What makes for a good ML researcher? (01:12:57)
Three stages of the research process (01:19:40)
How do supervisors actually add value? (01:31:53)
An AI PhD – with these timelines?! (01:34:11)
Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
Remember: You can just do things (01:43:51)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore
Ryan Julian is a research scientist in embodied AI. He worked on large-scale robotics foundation models at DeepMind and got his PhD in machine learning at USC in 2021. In our conversation today, we discuss…
What makes a robot a robot, and what makes robotics so difficult,
The promise of robotic foundation models and strategies to overcome the data bottleneck,
Why full labor replacement is far less likely than human-robot synergy,
China's top players in the robotics industry, and what sets them apart from American companies and research institutions,
How robots will impact manufacturing, and how quickly we can expect to see robotics take off.

O*NET's ontology of labor: http://onetcenter.org/database.html
ChinaTalk's Unitree coverage: https://www.chinatalk.media/p/unitree-ceo-on-chinas-robot-revolution

Robotics reading recommendations: Chris Paxton, Ted Xiao, C Zhang, and The Humanoid Hub on X. You can also check out the General Robots and Learning and Control Substacks, Vincent Vanhoucke on Medium, and IEEE's robotics coverage.

Today's podcast is brought to you by 80,000 Hours, a nonprofit that helps people find fulfilling careers that do good. 80,000 Hours — named for the average length of a career — has been doing in-depth research on AI issues for over a decade, producing reports on how the US and China can manage existential risk, scenarios for potential AI catastrophe, and examining the concrete steps you can take to help ensure AI development goes well. Their research suggests that working to reduce risks from advanced AI could be one of the most impactful ways to make a positive difference in the world.

They provide free resources to help you contribute, including:
Detailed career reviews for paths like AI safety technical research, AI governance, information security, and AI hardware,
A job board with hundreds of high-impact opportunities,
A podcast featuring deep conversations with experts like Carl Shulman, Ajeya Cotra, and Tom Davidson,
Free, one-on-one career advising to help you find your best fit.

To learn more and access their research-backed career guides, visit 80000hours.org/ChinaTalk. To read their report about AI coordination between the US and China, visit http://80000hours.org/chinatalkcoord.

Outro music: Daft Punk - Motherboard (YouTube Link)

Learn more about your ad choices. Visit megaphone.fm/adchoices
David Abel is a Senior Research Scientist at DeepMind on the Agency team, and an Honorary Fellow at the University of Edinburgh. His research blends computer science and philosophy, exploring foundational questions about reinforcement learning, definitions, and the nature of agency.

Featured References

Plasticity as the Mirror of Empowerment
David Abel, Michael Bowling, André Barreto, Will Dabney, Shi Dong, Steven Hansen, Anna Harutyunyan, Khimya Khetarpal, Clare Lyle, Razvan Pascanu, Georgios Piliouras, Doina Precup, Jonathan Richens, Mark Rowland, Tom Schaul, Satinder Singh

A Definition of Continual RL
David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, Satinder Singh

Agency is Frame-Dependent
David Abel, André Barreto, Michael Bowling, Will Dabney, Shi Dong, Steven Hansen, Anna Harutyunyan, Khimya Khetarpal, Clare Lyle, Razvan Pascanu, Georgios Piliouras, Doina Precup, Jonathan Richens, Mark Rowland, Tom Schaul, Satinder Singh

On the Expressivity of Markov Reward
David Abel, Will Dabney, Anna Harutyunyan, Mark Ho, Michael Littman, Doina Precup, Satinder Singh — Outstanding Paper Award, NeurIPS 2021

Additional References
Bidirectional Communication Theory — Marko 1973
Causality, Feedback and Directed Information — Massey 1990
The Big World Hypothesis — Javed et al. 2024
Loss of plasticity in deep continual learning — Dohare et al. 2024
Three Dogmas of Reinforcement Learning — Abel 2024
Explaining dopamine through prediction errors and beyond — Gershman et al. 2024

David Abel Google Scholar
David Abel personal website
How big of trouble is Apple in when it comes to AI? It's so bad they're enlisting the help of their chief rival to catch up: Google. What's that mean for Google, and will the world FINALLY have an AI-powered Siri after years of broken promises? Tune in and find out.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Apple and Google Partnership for AI
Apple's Ongoing AI Strategy Failures
Bloomberg Report: Gemini AI Integration
Siri AI Overhaul With Google Gemini
Technical Details: Gemini on Apple Servers
World Knowledge Answers Feature Launch
Apple's AI Talent Exodus to Competitors
Legal Risks and AI Feature Lawsuits
Impact on Big Tech Competitive Landscape
Potential Timeline for Smarter Siri Release

Timestamps:
00:00 "Everyday AI: Daily Insights"
04:35 Apple's Rivalry and AI Struggles
09:03 Smart Assistants' Evolution and Apple's Challenge
10:15 Apple's AI-Powered Answer Engine
15:54 Apple's Private Cloud Security Architecture
17:53 Apple Expands Siri with Google AI
21:23 Apple's AI Ambitions and Challenges
26:06 Apple's AI Talent Exodus
30:49 Apple AI Team Exodus
32:48 Apple's Reliance on Google Dominance
35:04 "Siri's 2026 Update and Industry Impact"
38:44 Support and Stay Updated

Keywords: Apple, Google, Apple and Google partnership, Apple Intelligence, generative AI, Google Gemini, AI relevance, Siri, Siri failures, large language models, chief rival collaboration, Big Tech AI, market cap, AI-powered web search, AI search engine, Bloomberg report, AI features, AI partnership, AI summarizer, Apple AI delays, technological rivalry, OpenAI, Anthropic, Perplexity, AI foundation models, custom AI model, Private Cloud Compute, privacy architecture, AI talent exodus, machine learning, Apple lawsuits, false advertising, AI market competition, AI integration, hardware vs. software, ChatGPT alternative, Spotlight search, Safari AI integration, AI-driven device functionality, Meta, DeepMind, Microsoft AI, AI-powered summaries, web summarization, device intelligence, AI-powered assistants, smart assistant shortcomings

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
Mustafa Suleyman, CEO of AI at Microsoft and co-founder of DeepMind, has published a provocative essay warning about the dangers of "seemingly conscious AI." On today's Big Think edition of The AI Daily Brief, we explore his argument that as AI systems develop memory, personality, and the illusion of subjective experience, people may begin treating them as conscious beings, with profound consequences for society, law, and human identity. We dig into Suleyman's case for why this illusion matters more than the question of whether AI is actually conscious, the risks of model welfare debates, and why industry norms may need to change now.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Vanta - Simplify compliance - https://vanta.com/nlw
Plumb - The automation platform for AI experts and consultants https://useplumb.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/

Interested in sponsoring the show? nlw@breakdown.network