Podcasts about Google DeepMind

  • 592 PODCASTS
  • 1,132 EPISODES
  • 42m AVG DURATION
  • 1 DAILY NEW EPISODE
  • LATEST: Mar 21, 2026
Google DeepMind

POPULARITY

[Popularity trend chart, 2019–2026]


Best podcasts about Google DeepMind

Show all podcasts related to Google DeepMind

Latest podcast episodes about Google DeepMind

Imagination Skyway
Disney Imagineer Moritz Baecher | World of Frozen | Olaf Robotics | AI Physics

Imagination Skyway

Play Episode Listen Later Mar 21, 2026 65:47


World of Frozen at Disney Adventure World (Disneyland Paris) opened with Olaf, an autonomous robotic character. Developed by Walt Disney Imagineering Research and Development in partnership with Walt Disney Animation Studios, NVIDIA, and Google DeepMind, Olaf was built in months using Newton, an open-source, extensible AI physics engine that advances robot learning and development. In this episode, I interview Moritz Baecher about his work on this new Disney Parks robotic character inspired by Frozen, and I chat with James Grosch from Guide2WDW about his preview of World of Frozen.

Get ad-free episodes, bonus episodes, in-depth news analysis, and premium content at patreon.com/imaginationskyway. To plan a trip, be sure to work with KMV Travel. Read Matt's Imagineering column in WDW Magazine.

Imagination Skyway is a Disney Parks and Imagineering podcast. Episodes explore attraction design, recap Disney news, and dive into the stories behind the magic, including interviews with Disney Imagineers, Disney Legends, and other Disney creators. Not affiliated with or endorsed by The Walt Disney Company. Disney is a trademark of The Walt Disney Company.

Tag me and join the conversation below.
Instagram: www.instagram.com/imaginationskyway
Facebook: www.facebook.com/imaginationskyway
YouTube: https://www.youtube.com/@imaginationskyway
Email: matthew.krul@imaginationskyway.com

How to Support the Show
Share the podcast with your friends
Rate and review on Apple Podcasts or Spotify
Join our Patreon Group - https://www.patreon.com/imaginationskyway
Enjoy the show!

Investor Fuel Real Estate Investing Mastermind - Audio Version
AI for Real Estate Pros: How to Build an AI-First Business | Steve Brown

Investor Fuel Real Estate Investing Mastermind - Audio Version

Play Episode Listen Later Mar 18, 2026 20:44


In this episode of the Real Estate Pros Podcast, host Scott Bursey sits down with AI futurist and former Google DeepMind executive Steve Brown to discuss how artificial intelligence is transforming the way businesses operate. Steve shares insights from his decades of experience in technology, including his time at Intel and Google DeepMind, and explains why AI is the most impactful technology shaping the future of business.

Professional Real Estate Investors - How we can help you:

Investor Fuel Mastermind: Learn more about the Investor Fuel Mastermind, including 100% deal financing, massive discounts from vendors and sponsors you're already using, our world-class community of over 150 members, and SO much more here: http://www.investorfuel.com/apply

Investor Machine Marketing Partnership: Are you looking for consistent, high-quality lead generation? Investor Machine is America's #1 lead generation service for professional investors. Investor Machine provides true 'white glove' support to help you build the perfect marketing plan, then we'll execute it for you…talking and working together on an ongoing basis to help you hit YOUR goals! Learn more here: http://www.investormachine.com

Coaching with Mike Hambright: Interested in 1-on-1 coaching with Mike Hambright? Mike coaches entrepreneurs looking to level up, build coaching or service-based businesses (Mike runs multiple 7- and 8-figure-a-year businesses), build a coaching program, and more. Learn more here: https://investorfuel.com/coachingwithmike

Attend a Vacation/Mastermind Retreat with Mike Hambright: Interested in joining a "mini-mastermind" with Mike and his private clients on an upcoming "Retreat", at locations like Cabo San Lucas, Napa, a Park City ski trip, Yellowstone, or even Mike's East Texas "Big H Ranch"? Learn more here: http://www.investorfuel.com/retreat

Property Insurance: Join the largest and most investor-friendly property insurance provider in 2 minutes. Free to join, and insure all your flips and rentals within minutes! There is NO easier insurance provider on the planet (turn insurance on or off in 1 minute without talking to anyone!), and there's no 15-30% agent markup through this platform! Register here: https://myinvestorinsurance.com/

New Real Estate Investors - How we can work together:

Investor Fuel Club (Coaching and Deal Partner Community): Looking to kickstart your real estate investing career? Join our one-of-a-kind Coaching Community, Investor Fuel Club, where you'll get trained by some of the best real estate investors in America, and partner with them on deals! You don't need $ for deals…we'll partner with you and hold your hand along the way! Learn more here: http://www.investorfuel.com/club

People and Projects Podcast: Project Management Podcast
PPP 502 | When Process Is Not Enough: The Human Side of Project Leadership, with Brett Harned

People and Projects Podcast: Project Management Podcast

Play Episode Listen Later Mar 17, 2026 44:24


Summary

In this episode, Andy talks with Brett Harned, founder of the Digital PM Community and the Digital PM Summit, and author of Project Management for Humans: Helping People Get Things Done. Brett has spent years coaching project leaders and helping organizations rethink what project management really is. His core conviction: the human side of the work is not a nice-to-have. It is the work. In this conversation, you'll hear how Brett fell into project management and what early experiences shaped his perspective on people and projects. You'll learn the patterns he sees repeated across teams and industries, practical habits for when projects feel messy or start to drift, and why he believes project management is a leadership role that most organizations still undervalue. Brett also shares his candid take on AI, what it can and cannot do for project leaders, and what advice he would give his younger self. If you lead projects or teams, whether or not you have a PM title, this episode is for you!

Sound Bites

"Often with PMs, it's finding or receiving or feeling the permission to lead like a human instead of like a machine or a robot."
"Projects fail because conversations didn't happen or they happened way too late."
"Project management is a leadership role and too often organizations don't see it as a leadership role the way that they should."
"Project managers are quietly carrying emotional labor that no one really acknowledges."
"You can't earn trust by being invisible."
"The role has become less about task tracking and more about judgment, good communication and trust building."
"If you call people on your team resources, they have every right to call you overhead."
"Slowing conversations down before speeding up the work is like the biggest thing."
"Drift isn't usually about effort. It's about misaligned understanding."
"AI is not going to replace a really good leader."
"AI is great at admin. It's terrible at the leadership stuff. It can't read the room, it can't navigate tension, it can't earn trust."
"Say the thing now. Saying something early is almost always safer than saying it too late."
"The job of a project manager isn't to absorb chaos. It's to make it a conversation."
"Caring about people and building relationships is a skill, and it's a skill that's necessary for this career."

Chapters

00:00 Introduction
01:52 Start of Interview
01:57 How Brett Describes What He Does
03:29 When the People Side Became Clear
06:52 Patterns Across Teams and Organizations
10:32 How Expectations of the PM Role Have Changed
12:28 The Impact of Remote and Hybrid Work
15:26 Practices for When Projects Feel Messy
18:20 How to Name What Is Happening Out Loud
21:30 A Question for When Projects Start to Drift
23:43 How AI Will and Won't Change the PM Role
25:50 Practical Ways Brett Uses AI
30:21 Advice to Younger Brett
33:40 How PM Skills Show Up Outside of Work
35:58 The PM Squad and Same Team Partners
38:01 End of Interview
38:22 Andy Comments After the Interview
41:30 Outtakes

Learn More

You can learn more about Brett and his work at SameTeamPartners.com and BrettHarned.com. For more learning on this topic, check out:
Episode 336 with Clint Padgett. During the interview with Brett, Andy mentioned the weakness of using only percent complete or status colors. That's something Clint and Andy talked about in episode 336.
Episode 99 with Mike Roberto. The topic of conflict came up several times in this discussion. In episode 99, Mike and Andy talk about managing the tension between conflict and consensus. It's a discussion worth hearing, especially if you grew up thinking conflict is mostly a negative.
Episode 500 with Steve Brown, former Google DeepMind futurist. Andy and Steve talk about AI and the future of work, and it's a discussion highly recommended for anyone leading projects today.

Chat with PMeLa

You can chat directly with PMeLa—the podcast's AI persona—to get episode recommendations and answers to your project management and leadership questions. Visit PeopleAndProjectsPodcast.com/PMeLa to chat with her.

Pass the PMP Exam

If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start. Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!

Join Us for LEAD52

I know you want to be a more confident leader–that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com.

Thanks!

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills

Topics: Project Management, Leadership, Team Dynamics, Communication, Emotional Labor, Human-Centered Leadership, Conflict Management, AI, Future of Work, Stakeholder Management, Psychological Safety, Remote Work, Project Recovery

The following music was used for this episode:
Music: Echo by Alexander Nakarada
License (CC BY 4.0): https://filmmusic.io/standard-license
Music: Synthiemania by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license

Rental Property Owner & Real Estate Investor Podcast
Steve Brown Keynote Preview: The State Of AI And What It Means For Your Business—MREIC 2026

Rental Property Owner & Real Estate Investor Podcast

Play Episode Listen Later Mar 16, 2026 30:44


AI is no longer theoretical. It is already changing how real estate decisions get made in underwriting, leasing, operations, marketing, asset management, and communication. In this special episode, I sit down with Steve Brown, AI futurist, bestselling author, and former senior leader at Google DeepMind and Intel. Steve is the Opening Keynote Speaker at the 2026 Midwest Real Estate Investor Conference (MREIC), and his message is clear: this shift is happening faster than most people realize—and "wait and see" is not a strategy.

Steve's lens is built for real-world operating environments with real assets, teams, capital, and risk. He breaks down what AI changes at the workflow level, where it creates real leverage, and what actually matters now versus what can wait, so you do not waste time, money, or focus chasing the wrong thing. Steve explains what's accelerating in AI right now, what's likely to change over the next few years, and how real estate entrepreneurs and business owners should think about AI as a capability shift, not a collection of tools.

In This Episode, Steve Breaks Down:
Why AI is moving faster than even experts expected
Why buying AI licenses is not a strategy (and what is)
How smaller teams can operate like they're 5x or 10x their size
The shift from "people are the engine" to "AI becomes the engine"
What it takes to rethink workflows, not just adopt tools
The biggest fears, misconceptions, and early implementation mistakes leaders make

We also dig into the real competitive risk ahead: you won't lose to AI, you'll lose to someone who uses AI better than you. That applies directly to real estate investors. If you own rental properties, manage assets, raise capital, or run a real estate business, this conversation will sharpen how you think about leverage, workflows, and long-term competitiveness.

MREIC 2026: Opening Keynote Details

Midwest Real Estate Investor Conference (MREIC)
DeVos Place Conference Center | Grand Rapids, Michigan
April 27–28, 2026
Steve Brown Opening Keynote: Monday, April 27 at 9:00 AM
Keynote Title: Navigating What's Next: Practical AI Strategy for Real Estate Investors and Operators
Steve will also host a VIP Luncheon at 12:30 PM for attendees who want to go deeper on practical AI strategy and real-world implementation.
Learn more and get tickets: midwestreiconference.com

About Steve Brown

Steve Brown is an AI futurist and former executive at Google DeepMind and Intel. He spent decades helping Fortune 100 companies navigate digital transformation and now advises organizations on how to thrive in the AI era. He is the author of The AI Ultimatum, written specifically for business leaders who want to understand and implement AI strategically. Learn more at: stevebrown.ai

Today's episode is brought to you by Green Property Management, managing everything from single family homes to apartment complexes in the West Michigan area. www.livegreenlocal.com
And RCB & Associates, helping Michigan-based real estate investors and small business owners navigate the complex world of health insurance and Medicare benefits. www.rcbassociatesllc.com

Marketing Against The Grain
This One Chart Exposes Why Most Companies Are Failing At AI

Marketing Against The Grain

Play Episode Listen Later Mar 10, 2026 17:18


Get our AI Adoption playbook to redesign your business with AI: https://clickhubspot.com/kfwm

Ep. 407 Did you know that 84% of people surveyed have never used AI? Kipp and Kieran dive into why the biggest threat to your business isn't the competition or the economy—it's your own inability to integrate AI, and how closing the gap between potential and reality is the key to outsized growth. Learn more on why new AI models aren't the game-changer you think, what most businesses get wrong when adopting AI, and the practical steps you can take to redesign your workflows and build an AI-native company.

Mentions
OpenAI https://openai.com/
Perplexity https://www.perplexity.ai/
ChatGPT https://chatgpt.com/
Claude https://claude.ai/
Google DeepMind https://deepmind.google/
Loom https://www.loom.com/

Get our guide to build your own Custom GPT: https://clickhubspot.com/customgpt

We're creating our next round of content and want to ensure it tackles the challenges you're facing at work or in your business. To understand your biggest challenges, we've put together a survey and we'd love to hear from you! https://bit.ly/matg-research

Resource: [Free] Steal our favorite AI Prompts featured on the show! Grab them here: https://clickhubspot.com/aip

We're on Social Media! Follow us for everyday marketing wisdom straight to your feed
YouTube: https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg
Twitter: https://twitter.com/matgpod
TikTok: https://www.tiktok.com/@matgpod
Join our community: https://landing.connect.com/matg

Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934 If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support.

Host Links:
Kipp Bodnar: https://twitter.com/kippbodnar
Kieran Flanagan: https://twitter.com/searchbrat

'Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by HubSpot Media // Produced by Darren Clarke.

Darede Cast
Cloud.IA - EP4 | Temporada 1 | O que está acontecendo com a IA agora? 5 notícias que explicam tudo

Darede Cast

Play Episode Listen Later Mar 9, 2026 37:42


Artificial intelligence is evolving rapidly, and some recent developments show that we are entering a new phase of the technology. In this episode we analyze some of the biggest current stories in the world of AI. You will learn about:
• The hiring of the creator of OpenClaw by OpenAI
• Systems capable of simulating entire societies with agents
• The impact of hyper-realistic AI-generated videos and Hollywood's reaction
• The debate over how to monetize artificial intelligence models
• Google DeepMind's new world model, which creates interactive 3D environments
These changes show that AI is evolving from tools that answer questions into systems capable of acting, simulating, and creating complex environments. Watch the full episode to understand what these changes could mean for the future of technology.

People and Projects Podcast: Project Management Podcast
PPP 500 | When AI Becomes a Digital Colleague: What Leaders Need to Know, with former Google DeepMind Futurist Steve Brown

People and Projects Podcast: Project Management Podcast

Play Episode Listen Later Mar 6, 2026 41:27


Summary

Welcome to our 500th episode! To celebrate this milestone, Andy talks with Steve Brown, AI futurist, keynote speaker, and author of The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation. Steve brings a rare perspective shaped by years at Intel and Google DeepMind, and today helps organizations navigate two vital questions: what future do you want to build with AI, and what future do you want to avoid? They explore why waiting isn't actually the safe option it feels like, how to think about the different "flavors" of AI beyond just generative tools, and what it really means to orchestrate humans, AI agents, and robots together in the workplace. Steve introduces three types of AI agents—offload, elevate, and extend—and explains the crucial difference between automating tasks and truly transforming how work gets done. You'll also hear his candid take on the fear of being replaced and why doubling down on your humanity is the smartest career move you can make right now. If you're looking for a practical, empowering guide to leading through the AI revolution—without the hype—this episode is for you!

Sound Bites

"The difference between an AI-enabled or AI-first company and an AI laggard is going to be so great that if you don't get on the train, you may get to the point where you can never catch up."
"Your competitors who have embraced AI faster than you are going to be just kicking your butt all over town."
"There's a serious cost to inaction in that you can become made irrelevant."
"The danger with that is you may automate yourself. It may automate away all of the differentiation you have in your brand and your company."
"AI is this sort of amplification technology, and the challenge is to balance cost-cutting and value creation."
"Each flavor of AI is useful for solving a different type of business problem."
"It feels like a digital employee, right? A digital worker that works for you."
"It's taking the suck out of your job."
"The real opportunity here is to transform the way you do work rather than just try and automate away tasks or people."
"The workplace of the future is going to be three groups. Humans will still be in the workforce. Great! Go us!"
"You won't be replaced by an AI or a robot. You'll be replaced by someone who knows how to use AI better than you do."
"Double down on your humanity."
"Focus on building the skills that cannot be replaced, or at least won't be replaced by machines anytime soon."
"At the end of all of this is going to be lives of abundance, where we have the things that we need."

Chapters

00:00 Introduction
01:45 Start of Interview
01:54 Steve's Career Journey from Intel to DeepMind
05:00 Understanding the AI Ultimatum
08:23 Our First AI Moments
09:32 The Flavors of AI
13:54 Three Pathways to Creating Value with AI
15:11 Automation vs. Transformation
17:10 Orchestrating Humans, AI, and Robots
19:01 Real-World Examples of AI Agents
21:33 Physically Intelligent Robots in the Workplace
24:13 Addressing Fear and Resistance to AI
26:44 Preparing the Next Generation for the AI Age
29:56 Where to Learn More About Steve
31:01 End of Interview
31:38 Andy Comments After the Interview
36:23 Outtakes

Learn More

You can learn more about Steve and his work at SteveBrown.ai. For more learning on this topic, check out:
Episode 479 with Matt Mong. It's a discussion about the AI skills you need to stay relevant.
Episode 454 with Christie Smith. She talks about how AI is changing leadership, and what we can do about that now.
Episode 437 with Nada Sanders. It's a discussion about future-prepping your career in an age of AI.
You can also chat directly with PMeLa—the podcast's AI persona—to get episode recommendations and answers to your project management and leadership questions. Visit PeopleAndProjectsPodcast.com/PMeLa to chat with her.

Level Up Your AI Skills

Join other listeners from around the world who are taking our AI Made Simple course to prepare for an AI-infused future. Just go to ai.PeopleAndProjectsPodcast.com.

Pass the PMP Exam This Year

If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start. Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!

Join Us for LEAD52

I know you want to be a more confident leader–that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com.

Thanks!

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Business Acumen

Topics: Artificial Intelligence, Leadership, Future of Work, AI Strategy, Digital Transformation, Agentic AI, Automation, Organizational Change, AI Ethics, Competitive Advantage, Human-AI Collaboration, Technology Adoption

The following music was used for this episode:
Music: Lullaby of Light featuring Cory Friesenhan by Sascha Ende
License (CC BY 4.0): https://filmmusic.io/standard-license
Music: Fashion Corporate by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license

The AI Policy Podcast
A Crash Course on AI Standards with Google DeepMind's Owen Larter

The AI Policy Podcast

Play Episode Listen Later Mar 6, 2026 38:44


In this episode, we're joined by Owen Larter, Head of Frontier Policy and Public Affairs at Google DeepMind, to explore the often-overlooked world of AI standards and the role they play in shaping how AI is developed and governed. We discuss what standards are and why they matter for technological progress (2:53), how standards are developed and the key organizations involved (16:05), the relationship between standards and AI regulation like the EU AI Act (26:58), and more.

Thriving on Overload
Ross Dawson on Humans + AI Agentic Systems (AC Ep34)

Thriving on Overload

Play Episode Listen Later Mar 4, 2026 19:12


"Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on." –Ross Dawson

About Ross Dawson

Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload.
LinkedIn Profile: Ross Dawson

What you will learn

How human-AI teams outperform human-only teams in productivity and efficiency
The crucial role of understanding AI strengths and limitations when designing collaborative workflows
Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
How user intent and 'machine fluency' impact the effectiveness of AI agents in economic and organizational contexts
The emergence of an 'agentic economy' and its implications for fairness, capability gaps, and representation
Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction

Episode Resources

Transcript

Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I'm going to just share a bit of an update and then share insights from three recent research papers that dig into something which I think is exceptionally important, which is how humans work with AI agentic systems.
And we'll look at a few different layers of that, from how small humans-plus-agent teams work, through to how we can delegate decisions to AI, through to some of the broader implications. But first, a bit of an update. 2026 seems to be moving exceptionally fast. It's a very interesting time to be alive, and I think it's pretty hard to even see what the end of this year is going to look like. So for me, I am doing my client work as usual. I've got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, plus a few industry-specific ones in financial services and elsewhere. And I'm also doing some work as an advisor on AI transformation programs, helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework to look at the phases, map those out, work out the issues, and guide and coach the leaders to do that effectively. But the rest of my time is focused on three ventures, and I'll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI for strategy app. This is really building a way in which we can capture the detailed nuance of the strategic thinking of the leaders of the organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options and strategic hypotheses, and to evolve effectively. That'll be in beta soon. Please reach out if you're interested in being part of the beta program, and then that'll go to market. So I'm deeply involved in that. We also have our Thought Weaver software, rebuilding previous software which had already been built on AI-augmented thinking workflows. That's more an individual tool that will be going into beta in the next few weeks. So again, go to Thought Weaver. Actually, don't—the website isn't updated yet—but I'll let you know when it's out, or stay posted for updates on that.
And also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve kind of been through the phase of augmentation of individuals, and we still need to work hard at doing that better. But the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI Agentic members? And it creates a whole different series of dynamics and new skills and capabilities. It really calls for how to participate in the humans plus AI team and how to lead humans plus AI teams. And that is again going into the first few test organizations in the next month or so. So again, just let me know. So today what we’re going to look at is this theme: teams of humans working with AI agents. So not individual AI as in chat, but where we have a lot of agents with various degrees of autonomy, but also agentic systems where these agents are interacting with each other as well as with humans. So there are three papers which I want to just talk about, just give you a quick overview, and please go and check out the papers in more detail if you’re interested. There’ll be links in the show notes. First is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT. So this, there was an experiment which had over 2,300 participants who were working on creating advertisements. And they had a whole array of humans plus AI, human-human teams, human-AI teams, sort of quite small or just in duos and so on, working on being able to create those which were then assessed in terms of quality and how they worked. So a few particularly interesting findings from that. So individually, just having a human-AI team essentially enhanced performance significantly compared to just human-only teams. And so they were able to move faster and to complete more of their tasks, and the quality was strong. 
But there's a phrase which is commonly used around the jagged frontier of AI capability, and it was quite clear that there were some domains where AI did very well and others where it didn't. And so this is where the design of the tasks, the design of the human-AI systems, and also the understanding by the human users of what AI is good at or not, are fundamental. In some cases, if AI was used in certain domains such as image quality, it actually decreased quality. So we need to understand where and how to apply AI across this jagged frontier and design the systems around that. This changes the role of the humans, of course. Humans then tend to delegate more. One of the things they tested for was how you behave differently if you know your teammate is an AI, as opposed to not knowing whether it is a human or an AI. And it changes. They become more task-oriented, they use social cues less to interact, and they essentially become more efficient. But some of these social cues which are valuable in human-human collaboration started to disappear. And this automation process meant that there was not, in the end, as much creative diversity. Now I've often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture: whether the AI sits at the stage of initial ideas which are then sorted and filtered by humans, or elsewhere in that process. But in this particular structure, they found that humans-plus-AI teams started to create more and more similar outputs. This homogenization of outputs in human-AI teams was very notable and significant. And so this again creates a design factor for how we build human-AI systems that do not lead to homogeneous output, and how we make sure that human diversity is maintained.
Often that can be done by having human outputs first, without AI then blunting or narrowing the breadth of the creative outputs of humans. The second paper I'd like to point to is called Intelligent AI Delegation, from a team at Google DeepMind. This addresses the point where we now have not just single AI agents to delegate decisions or problems to, but in fact systems of AI. And so this creates a different challenge. The key point is that you are delegating tasks, but delegating a task is more than just saying, okay, which agent gets it. You have to understand responsibility: where does accountability reside, and who is responsible for that? You need clarity around the roles of the agents, the boundaries of what they can and cannot do, clarity of intent and how that is communicated and cascaded through the agents, and the critical role of trust and appropriate degrees of trust in the systems. So this means that we have to define the different characteristics of the task. The paper goes through quite a few characteristics, and some of the critical ones were the degree of uncertainty around the task (obviously, if it is very clear, it can be appropriately delegated, but many tasks and problems are uncertain, and so this creates a different dynamic), whether it is verifiable, in the sense that you have high-quality information, whether decisions are reversible, and the degree of subjectivity, because not everything is data-driven. Assessing these task characteristics starts to define where human judgment plays a role, how you create those checks, and how you build that. So intelligent delegation is not just how the humans delegate, but in turn the structure of how that cascades down through the agents. And this requires the idea of dynamic assessment. You're not just setting and forgetting.
You are continuously reassessing what is happening with the context, what is changing in the stakes, and any uncertainty. So you are not locked into a single delegation structure; you change it over time and keep adapting as you execute, monitoring and replanning as you go. Transparency has to be built into the structure, so that where each decision is made and what authorizations are given is visible in an audit trail, and you can always see what is going on. You also need to scale how you coordinate these systems: small scale is fine, but you want to build something that can work across many agents. That requires a way to discover which agents are most appropriate and to establish the delegation of a particular task to them, again on a dynamic basis. And there is a final principle of systemic resilience: you have to expect that things will go wrong, so there is continuous monitoring, an understanding that these systems can be attacked in various ways, and the ability to recover. A very solid paper, quite deep, but offering some very good principles for how we can delegate to AI systems. The final of the three papers takes a higher-level view. It's called Agentic Interactions, and it's from Alex Imas and Sanjog Misra of the University of Chicago, and Kevin Lee at the University of Michigan. They look at what happens on a macro scale when decisions are increasingly delegated to AI agents. This is the agent economy that I've been talking about for a very long time, which is now very much coming to the fore. They examine what happens when we start to delegate more and more economic decisions, such as buying and selling. What they found is extraordinarily interesting.
They found that AI agents do in fact behave very similarly to their human creators. You can observe differences between agents from which you can infer the gender and personality of the person delegating to the agent, even though the agent is given no information about that person's gender or personality; those traits flow through anyway. So agents represent us in the market, as it were, potentially very accurately. This goes directly to the second point: the idea of machine fluency. AI fluency is very much a term in vogue at the moment, and the authors use machine fluency to mean how well a user can express their intent so that the agent is aligned with them. They found very significant differences in this ability, and people who were better at getting their agents to express their wishes could amplify their economic outcomes. Relatedly, they showed a correlation: people with higher educational levels were better able to delegate to AI, and their AI agents performed better and gave them better returns. Again, this points to ways in which differences could be aggravated in the agentic economy, when the agents that act for us start to reflect, among other things, differences in education or in how well we express our intentions through AI. There was one very interesting and, I suppose, counterintuitive result: women get better outcomes in negotiation when using AI agents than they do in human-to-human interactions. Again, this is without the AI agents knowing whether they are representing a woman.
In fact, this shows that in terms of machine fluency, the style and the ways in which women instruct and convey their intent to AI agents were, in this study, superior to those of men. In the real world there is, of course, unfortunately a bias toward male performance in negotiation, and that was inverted in the study. Exceptionally interesting. Pulling back to the common themes of these three papers: we increasingly live in a world where humans have relationships with agents, we are starting to work with them in teams and systems, and we are starting to build economies where humans are represented by agents. Our relationship to those agents, and our ability to delegate effectively, drives value for the individual but also across these emerging agentic systems. This is early days, because the realities of these human-agent systems are still nascent, but it starts to point to some of the potential, some of the challenges, some of the opportunities, and some of the work we have to do. I will be sharing more on these kinds of topics in my interviews and, of course, on the Humans Plus AI website: just go to humansplus.ai. To be frank, it hasn't been updated much recently, but we will be sharing a lot more there. LinkedIn is actually where I share the most, and I'm getting back on Twitter as well if you're interested. I'll be diving deep and trying to share what I find useful as well as interesting in helping us create a world where humans come first and AI complements us. The reality is we are moving to humans plus AI systems, and if we design them well, with the right intentions, we can make this a world that drives human value first. Glad to have you on the journey. Have a wonderful rest of your day. The post Ross Dawson on Humans + AI Agentic Systems (AC Ep34) appeared first on Humans + AI.

Deep State Radio
AI, Energy and Climate: India AI Impact Summit: Arunabha Ghosh

Deep State Radio

Play Episode Listen Later Mar 3, 2026 42:40


More than 35,000 people attended the recent India AI Impact Summit in Delhi, which featured speeches from more than 20 heads of state and dozens of technology company leaders including Sam Altman of OpenAI, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind. In this episode, host David Sandalow offers his reflections on the Summit and speaks with Arunabha Ghosh, President of CEEW, a leading Delhi-based public policy think tank. Ghosh offers his views on the Summit, data center construction in India and around the world and the role of AI in sustainable development, among other topics.   This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S.. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Neuron: AI Explained
AI Is Helping Build the Power Source It Desperately Needs (Brandon Sorbom w/ Commonwealth Fusion Systems)

The Neuron: AI Explained

Play Episode Listen Later Mar 3, 2026 63:59


AI data centers are going to double their power consumption by 2030—so where's all that energy coming from? One answer is fusion, the same process that powers the sun.
In this episode of The Neuron, we're joined by Brandon Sorbom, Chief Science Officer and Co-founder of Commonwealth Fusion Systems, to explore how his company is racing to build the world's first commercial fusion power plant—and how AI is helping them get there faster.
Brandon explains why fusion has been "30 years away" for decades, what changed with high-temperature superconducting magnets, and why fusion is fundamentally safer than fission (hint: fusion is "default off"). We dive into CFS's collaborations with Google DeepMind and NVIDIA, what it takes to wrangle 10,000 unique parts, and when we might actually see fusion on the grid.
You'll learn:
• What fusion actually is (and why it's not nuclear fission)
• Why high-temperature superconducting magnets changed everything
• How AI is accelerating plasma control and simulation
• The safety profile that makes fusion regulated like an MRI, not a reactor
• When CFS expects to hit Q > 1 (net energy) and beyond
To learn more about Commonwealth Fusion Systems, visit https://cfs.energy.
For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai


AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Unified Latents (UL): How to train your latents (Teaser for Feb 28th Technical Update)

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Feb 28, 2026 2:04


Listen to the full audio at https://podcasts.apple.com/us/podcast/scientist-vs-storyteller-benchmarking-gpt-5-2-claude/id1684415169?i=1000752001078
For years, Latent Diffusion Models—the tech behind Stable Diffusion and DALL-E—have relied on a bit of an 'art form' called KL-regularization. Basically, researchers had to manually guess how much to compress an image before the AI started to lose the details. If you compressed too much, the image got blurry. Too little, and the model became too expensive to train.
Enter Unified Latents, or UL.
In a new paper out of DeepMind Amsterdam, researchers have introduced a framework that replaces that guesswork with a single, cohesive mathematical objective. Instead of training the compressor and the generator separately, UL trains the Encoder, the Prior, and the Decoder all at once.
The 'Secret Sauce' here is something called Fixed Gaussian Noise Encoding. By injecting a constant, specific amount of noise during the encoding process, DeepMind has created a 'Maximum Precision Link.' This forces the encoder to be incredibly efficient, focusing only on the most important structures of an image.
The results are staggering: UL achieved a state-of-the-art Video Distance score on the Kinetics-600 dataset and hit a competitive 1.4 FID on ImageNet—all while using significantly less computational power than traditional methods.
This episode is made possible by our sponsors:

The Neuron: AI Explained
Gemini 3 Flash (Smartest, Cheapest AI) with Google DeepMind's Logan Kilpatrick

The Neuron: AI Explained

Play Episode Listen Later Feb 27, 2026 119:28


Google just dropped Gemini 3 Flash—a model that outperforms Gemini 2.5 Pro (their last top model) while running 3x faster at less than 1/4 the cost. It's frontier-level reasoning at Flash-level speed, and it's rolling out globally right now.
We're sitting down with Logan Kilpatrick from Google DeepMind to explore what this actually means for developers, knowledge workers, and anyone trying to figure out how AI fits into their workflow.
What we'll cover:

Google SRE Prodcast
The One With Damion Yates and Building AI systems

Google SRE Prodcast

Play Episode Listen Later Feb 26, 2026 31:27


How do you introduce Site Reliability Engineering to an AI research lab, bringing concepts of scale to engineers who are at the leading edge of AI systems? In the latest episode of The Prodcast, hosts Steve McGhee and Florian Rathgeber chat with Damion Yates, who helped establish the reliability engineering culture at Google DeepMind. Damion shares his journey of bringing scalable infrastructure to DeepMind, supporting massive machine learning experiments. Discover the unique challenges of supporting AI research, such as managing highly expensive "lockstep" training models where a single machine failure halts the entire process. Damion also explains why he believes "luck is our enemy" in systems engineering, and why protecting a research scientist's time is the ultimate metric for success.

touch point podcast
TP476: Good Enough for People Is Not Good Enough for Machines

touch point podcast

Play Episode Listen Later Feb 25, 2026 36:55


Health systems have spent 20 years optimizing for the patient who searches, clicks, and reads. They are not optimizing for the agent that queries, evaluates, and routes. Those are two different audiences — and most organizations are only ready for one of them. The digital front door was built on a human assumption: that discovery begins with a search, passes through a website, and ends in conversion. Agentic AI doesn't use doors. It uses structured pathways, machine-readable attributes, and decision logic that operates entirely outside your owned channel. The routing is already happening. The question is whether health systems are in the decision set - or invisible to it. The infrastructure making this possible isn't speculative. Model Context Protocol (MCP), now an open standard backed by Anthropic, OpenAI, and Google DeepMind, defines how AI agents connect to external tools and data sources. NLWeb, launched by Microsoft in May 2025, turns websites into machine-queryable endpoints. Together, they create an execution layer on top of your digital ecosystem. And most hospital websites aren't built to be legible to it. Chris Boyer and Reed Smith work through what this shift actually requires: Why the patient journey now runs conversation → AI interpretation → machine routing → conversion — and health systems control only the last step What breaks when machines encounter unstructured provider bios, inconsistent service line naming, and scheduling availability gaps Why brand strength built on emotional resonance doesn't translate to machine-readable signals — and what does The gap between "78% of health systems engaged in AI projects" and the 52% that feel operationally ready to implement them What a practical machine readiness audit looks like, and who inside the organization should own it The organizational problem is as hard as the technical one. Marketing owns content but rarely owns schema. 
IT owns infrastructure but rarely thinks in terms of machine-readable patient experience. Someone has to own machine readiness as a cross-functional problem. Right now, almost no one does. If your digital strategy was designed for the patient who searches, clicks, and reads -  it was not designed for the agent that queries, evaluates, and routes. Mentions From the Show:  Dean Browell on LinkedIn Danny Fell on LinkedIn Reed Smith on LinkedIn Chris Boyer on LinkedIn Chris Boyer website Chris Boyer on BlueSky Reed Smith on BlueSky Learn more about your ad choices. Visit megaphone.fm/adchoices

Týdeník Respekt • Podcasty
Stanislav Fort: The AI bubble is a myth. Progress is fast, and people may simply not know how good the models already are

Týdeník Respekt • Podcasty

Play Episode Listen Later Feb 24, 2026 96:12


With Stanislav Fort on the rise of AI agents, the limits and risks of artificial intelligence, protecting and repairing software cathedrals, and building the cybersecurity startup Aisle in Prague. Hosted by Štěpán Sedláček.
We are probably living through a technological revolution whose speed, scale, and potential impact on human life and work are unprecedented, however it ends. The rise of large language models and generative AI is increasingly evident across spheres of human activity, from programming to art. Questions once debated by a relatively small group of people involved in AI research and development, or in science fiction, are now often at the center of society-wide debate, though they arguably deserve even more attention, including from governments. The question of whether artificial intelligence that surpasses humans will ever exist has largely been displaced by the question of whether it is a year, several years, or longer away.
Stanislav Fort is a mathematician, physicist, and expert on artificial intelligence and large language models (LLMs) who previously worked at leading companies in the field, including Google DeepMind and Anthropic. How does he see this year in AI?
"I think this year most people will realize that AI works and is capable of doing useful intellectual work. In 2025, reasoning models went mainstream, especially with the arrival of DeepSeek's R1 model. Since then, models have improved enormously and become capable of solving long, difficult intellectual tasks across fields, tasks that require coordinating thinking over long time horizons. And those horizons have lengthened at a rapid pace, month after month. Today, most people in programming, software engineering, and other fields that depend heavily on computers realize that we are on the verge of these systems working on the same kinds of problems as elite humans, without needing much supervision. 2026 will be the year when AI agents, and the reasoning models that power them, start working in real, economically important activities," says Stanislav Fort, who co-founded Aisle with Ondřej Vlček and Jaya Baloo and serves as its chief scientist.
They have built an autonomous AI tool that can quickly find and fix security vulnerabilities in complex software systems such as the OpenSSL protocol, which encrypts most communication on the web. After a year in cybersecurity, what are their goals? What fundamental problem have they managed to solve? What does he make of the rise of AI agents and the activity around the Moltbook network? Does he see any fundamental limits to the development of artificial intelligence? What does he think of the AI bubble in the markets? How should Europe approach the current race in AI development? And what are the pitfalls of founding a cybersecurity startup in Prague? Štěpán Sedláček asks about all this and more on the Zeitgeist podcast.

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 719: Google Gemini 3.1 tops charts, Claude Sonnet 4.6 impresses, New OpenAI leaks reveal their massive AI hardware plans and more

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Feb 23, 2026 43:45


✅ Two major model releases from Google and Anthropic ✅ The usual AI drama ✅ Surprising AI updates no one saw coming ✅ AI leaks and reports that, if true, could change how we work
Yeah, there was a lot to follow this week in AI. If you missed anything, we've got you covered. Google Gemini 3.1 tops charts, Claude Sonnet 4.6 impresses, New OpenAI leaks reveal their massive AI hardware plans and more -- An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Anthropic Revenue Growth vs OpenAI Projections
OpenAI's 2030 Hardware and Revenue Plans
OpenAI and Anthropic Beef at India Summit
AI Global Summit: New Delhi Declaration Overview
Google Gemini 3.1 Pro Three-Tier Reasoning System
Gemini 3.1 Pro Benchmark and Performance Score
Claude Sonnet 4.6 Release and Benchmark Results
Anthropic Model Tier Comparisons: Haiku, Sonnet, Opus
Google Pameli Photoshoot AI for Product Images
AI Job Automation Concerns: Andrew Yang Analysis
OpenAI Consumer Hardware: Speaker, Glasses, Light
Weekly AI Model Updates and Feature Rollouts
Timestamps:
00:00 "Anthropic vs OpenAI Revenue Race"
04:00 Anthropic vs OpenAI Revenue Battle
07:39 Anthropic's API Usage Decline
11:03 AI Summit Sparks Debate and Criticism
16:37 "Gemini 3.1 Pro Dominates Benchmarks"
18:23 "Google's Edge in AI Race"
20:56 "SONNET 4.6 Outperforms Opus"
24:13 "Google's AI Photoshoot Tool"
29:57 "AI's Impact on Jobs"
31:13 AI Dominance & OpenAI Hardware
35:03 AI Revenue Risks and Competition
41:10 "Subscribe for AI Updates"
42:08 "Subscribe to Everyday AI Updates"
Keywords: Gemini 3.1, Google DeepMind, AI news, Large Language Model, OpenAI, Anthropic, Claude Sonnet 4.6, Claude Opus 4.6, ChatGPT, Sam Altman, Dario Amodei, Global AI Summit, AI Impact Summit India, AI powered hardware, Smart speaker, Smart glasses, AI chip spending, Compute infrastructure, Revenue growth
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Start Here ▶️ Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop, Episode 691, or get free access to our Inner Circle community and access all episodes there: StartHereSeries.com

Startup Island TAIWAN Podcast
EP3-26 | 【AI News】Demis Hassabis on "AI Renaissance"

Startup Island TAIWAN Podcast

Play Episode Listen Later Feb 23, 2026 22:50


This episode explores the vision of Demis Hassabis, CEO of Google DeepMind and recipient of the 2024 Nobel Prize in Chemistry. Hassabis argues that 2026 marks a pivotal turning point in human history, as we enter what he describes as an “AI Renaissance”—an era whose impact could be ten times greater than the Industrial Revolution, unfolding at ten times the speed. He predicts that Artificial General Intelligence (AGI) could be achieved before 2030, while cautioning that today's AI systems remain in a state of “jagged intelligence,” still lacking robust reasoning and long-term planning capabilities. As the industry enters a phase of consolidation, Hassabis is focused on transforming AI into a scientific engine. Through breakthroughs such as AlphaFold and initiatives like Isomorphic Labs, he aims to reshape drug discovery, while collaborations with the U.S. Department of Energy—such as the “Genesis Project”—seek to accelerate progress in energy innovation. At the core of his vision is the concept of “Radical Abundance.” As AI drives the marginal cost of healthcare and energy toward near zero, society may begin to transition into a post-scarcity era. To navigate this shift, Hassabis proposes new social mechanisms, including a “Global Abundance Dividend,” and emphasizes that AI governance must extend beyond technologists, requiring international cooperation to ensure these technologies benefit all of humanity. Powered by Firstory Hosting

早安英文-最调皮的英语电台
Foreign Press Deep Dive | AI will cure all diseases within ten years! Another DeepMind masterpiece: is the code of human life about to be rewritten?

早安英文-最调皮的英语电台

Play Episode Listen Later Feb 22, 2026 24:16


[Subscribe] New episodes every morning at 5:30, on time.
[Original article] Title: Google DeepMind unleashes new AI to investigate DNA's ‘dark matter'. DeepMind's AlphaGenome AI model could help solve the problem of predicting how variations in noncoding DNA shape gene expression.
Text: DNA is the blueprint for life, influencing our health. We know that our genes, the genetic “words” that encode proteins, play a major role in health and disease. But more than 98 percent of our genome consists of DNA that doesn't build proteins. Once disregarded as “junk DNA,” scientists now know that this molecular dark matter is crucial for determining gene activity in ways that keep us healthy—or cause disease.
Key vocabulary: encode v. /ɪnˈkoʊd/: to contain the instructions to produce a protein or function.
• A single gene can encode multiple proteins through alternative splicing.
• Only about 2% of the human genome actually encodes proteins.
For the full article and detailed study notes, follow the WeChat account 「早安英文」 and reply "外刊". More interesting English learning material awaits you!
[About the show] "Morning English: Daily Foreign Press Close Reading" guides you through the latest foreign publications and the hottest international stories: analyzing grammatical structures, breaking down long and difficult sentences, offering down-to-earth translations, and explaining key vocabulary. All selections come from leading international publications such as The Economist, The New York Times, The Wall Street Journal, The Washington Post, The Atlantic, Science, and National Geographic.
[Who it's for] 1. English learners who follow current events and want to pick up the latest, most current English expressions; 2. Anyone who wants to improve their listening, speaking, reading, and writing through authentic English; 3. English enthusiasts who want to master expressions quickly and plan to study or travel abroad; 4. Candidates preparing for English exams (such as CET-4/6, TOEFL, IELTS, and postgraduate entrance exams).
[What you'll get] 1. More than 1,000 close-reading lessons on foreign publications, enriching your language and cultural background; 2. Word-by-word, sentence-by-sentence explanations to systematically build vocabulary, listening, reading, and grammar; 3. Study notes with every episode, including full annotations, analysis of long and difficult sentences, and tricky grammar points, to clear away reading obstacles.

Squawk Box Europe Express
Brent surpasses $70bbl as Iran tensions heighten

Squawk Box Europe Express

Play Episode Listen Later Feb 20, 2026 26:41


Crude prices move higher with Brent now surpassing $70 a barrel after President Trump warns of potential consequences should Iran fail to reach a deal over its nuclear programme. U.S.-based private credit group Blue Owl announces it will halt investor withdrawals from a debt fund for retail traders, causing shares to slump across the sector. We are live at the A.I. Impact summit in New Delhi where CNBC learns that Nvidia is launching a new $30bn investment into OpenAI. Google DeepMind co-founder and CEO Demis Hassabis says the sector is suffering from a shortfall of memory and chips. And in aviation news, Airbus cuts its output target causing shares to fall but AF-KLM posts more than €2bn in FY profit.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Big Technology Podcast
How Google DeepMind Operates & Experiments — With Lila Ibrahim and James Manyika

Big Technology Podcast

Play Episode Listen Later Feb 18, 2026 50:01


Lila Ibrahim is the COO of Google DeepMind. James Manyika is the senior Vice President for Research, Technology, and Society at Google. The two join Big Technology Podcast to discuss how Google's AI effort operates and runs experiments. In this conversation, we discuss the fundamental operating structure of DeepMind, how Google proper has become more experimental with the revival of Labs and other programs, and how the company is thinking about AI and education. We also cover weather and flood prediction at global scale, and training AI in space. Hit play for a deep inside look at the mechanics behind Google's AI research machine and the big ideas it's betting on next. Take back your personal data with Incogni! Go to incogni.com/bigtechpod and Use code bigtechpod at checkout, our code will get you 60% off on annual plans. Go check it out! Learn more about your ad choices. Visit megaphone.fm/adchoices

The Tech Blog Writer Podcast
Atlassian On Why AI Must Deliver Measurable Business Outcomes

The Tech Blog Writer Podcast

Play Episode Listen Later Feb 18, 2026 23:11


At Davos this year, some of the biggest names in tech sent a clear signal. AI is no longer a novelty. It is no longer a proof-of-concept exercise. As Demis Hassabis of Google DeepMind suggested, AI will shape more meaningful work. And Satya Nadella of Microsoft was even more direct. AI only matters if it improves real outcomes for people. So what does that look like inside the enterprise? In this episode of Tech Talks Daily, I'm joined by Andrew Boyagi, Customer CTO at Atlassian, to unpack how the conversation has shifted from experimentation to execution. Developers, in many ways, are the perfect lens for understanding this moment. Over the last two decades, their role has expanded far beyond writing code. They now own products, infrastructure, operations, and business outcomes. AI is simply the next chapter in that evolution. Andrew argues that AI will not replace engineers. It will raise expectations. As intelligent tools absorb repetitive work, the real value moves up the stack. System design. Architectural thinking. Reviewing and refining AI-generated output and orchestrating solutions that solve genuine business problems. And through it all, humans remain firmly in the loop. We also explore what this means for leadership, why mindset is starting to matter more than technical skill alone, how organizations can avoid layering AI on top of broken processes. And why the companies pulling ahead are treating AI as a strategic discipline, not a feature upgrade. This is a conversation grounded in reality. It speaks to product leaders, CTOs, CIOs, and anyone asking a simple but powerful question. If we are investing in AI, what are we actually getting back? And before we close, we look ahead to Team '26 and the themes Andrew and his team are already working on.  If this year has been about proving value, what will the next chapter demand from enterprise leaders? As always, I'd love to hear your thoughts. 
Are you seeing proof of value in your organization yet, or are you still working through the pilot phase?

Moneycontrol Podcast
5043: India rolls out three desi AI models; Governments ignoring AI job fears, says Yoshua Bengio and more | MC Tech3

Moneycontrol Podcast

Play Episode Listen Later Feb 18, 2026 8:29


In today's Tech3 from Moneycontrol, we bring you a quick wrap from the India AI Impact Summit in Delhi. India's much-anticipated AI models are unveiled by Sarvam AI, Gnani.ai and the BharatGen consortium. Wikipedia co-founder Jimmy Wales speaks on AI and neutrality, while AI pioneer Yoshua Bengio warns about risk management and job displacement. We also track Google DeepMind's new partnership with Indian institutions and Demis Hassabis on the road to artificial general intelligence.

Jimmy's Jobs of the Future
Musk, Zuckerberg & The Future of AI Dex Hunter-Torricke

Jimmy's Jobs of the Future

Play Episode Listen Later Feb 17, 2026 60:45


Dex Hunter-Torricke has worked with some of the most influential people in Tech over the last 15 years. But now he's sounding the alarm. In this episode of Jobs of the Future, we sit down with a true Silicon Valley insider who has spent the last 15 years at the epicentre of the tech revolution. From serving as the first executive speechwriter for Eric Schmidt at Google to leading communications for Mark Zuckerberg at Facebook and Elon Musk at SpaceX, our guest has had a front-row seat to the decisions shaping our modern world. Most recently, he served as a senior leader at Google DeepMind, the world's premier AI lab, during the most pivotal moments in the race toward Artificial General Intelligence (AGI).
03:36 - His Tech Industry Journey
06:30 - Being at The Front Lines of AGI
07:05 - The Reality Check
09:09 - Why AI is So Different to Every Other Technology
11:05 - The AGI Countdown
12:14 - The Death of the "Good Life"
13:41 - The Geopolitics of Sovereignty
14:46 - Future-Proofing Your Career
18:39 - The Economy of Meaning
21:29 - The 60% Job Vulnerability
25:23 - The Brittle Power of Tech Giants
32:15 - Launching the Center for Tomorrow
52:30 - Redefining Success
57:00 - A Philosophy for Interdependence
**********
Follow us on socials!
Instagram: https://www.instagram.com/jimmysjobs
Tiktok: https://www.tiktok.com/@jimmysjobsofthefuture
Twitter / X: https://www.twitter.com/JimmyM
Linkedin: https://www.linkedin.com/in/jimmy-mcloughlin-obe/
Want to come on the show? hello@jobsofthefuture.co
Sponsor the show or Partner with us: sunny@jobsofthefuture.co
Credits:
Host / Exec Producer: Jimmy McLoughlin OBE
Producer: Sunny Winter https://www.linkedin.com/in/sunnywinter/
Junior Producer: Thuy Dong
Edited by: Ben Alexander Kippen
Learn more about your ad choices. Visit podcastchoices.com/adchoices

World vs Virus
The day after AGI: Two 'rock stars' of AI on what it will mean for humanity

World vs Virus

Play Episode Listen Later Feb 12, 2026 50:44


Artificial general intelligence (AGI) is that point in the future when the machines can do pretty much everything better than humans. When will it happen, what will it look like, and what will be the impact on humanity? Two of the brightest minds working in AI today, Demis Hassabis, Co-Founder and CEO of Google DeepMind, and Dario Amodei, Co-Founder and CEO of Anthropic, speak to Zanny Minton Beddoes, Editor-in-Chief of The Economist. Benjamin Larsen, an expert in AI at the World Economic Forum, introduces the conversation and gives us a primer on AGI.
You can watch the conversation from the Annual Meeting 2026 in Davos here: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2026/sessions/the-day-after-agi/
Links:
Centre for AI Excellence: https://centres.weforum.org/centre-for-ai-excellence/home
AI Global Alliance: https://initiatives.weforum.org/ai-global-alliance/home
Global Future Council on Artificial General Intelligence: https://initiatives.weforum.org/global-future-council-on-artificial-general-intelligence/home
Related podcasts: Check out all our podcasts on wef.ch/podcasts
YouTube: https://www.youtube.com/@wef/podcasts
Radio Davos - subscribe: https://pod.link/1504682164
Meet the Leader - subscribe: https://pod.link/1534915560
Agenda Dialogues - subscribe: https://pod.link/1574956552

SHINY HAPPY PEOPLE with Vinay Kumar
Episode 179: Steve Brown on the AI Ultimatum

SHINY HAPPY PEOPLE with Vinay Kumar

Play Episode Listen Later Feb 12, 2026 51:21


We are in the ‘Intelligence Age' and the ‘humans versus AI' debates are everywhere. So, we thought we'd bring in an AI futurist and a leading voice in AI, digital transformation, and how AI will shape business, education and society. Meet Steve Brown, entrepreneur and former Google DeepMind and Intel executive who has helped brands like Bank of America, Lenovo, Nespresso, Cameco, and Intuit prepare for what he calls The Intelligence Age. Drawing upon his decades of experience in artificial intelligence and high tech to help leaders build winning AI strategies that fuel innovation, boost performance, and drive growth, Steve succinctly explains the radical global transformation with AI in his book 'The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation'.
Hit play for Steve's take on the AI ultimatum and key takeaways from his book.
[2:46s] Genesis of Steve as an AI futurist
[10:09s] On ‘The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation'
[22:08s] On AI innovation, ethical AI, regulations
[36:16s] Top 3 future trends in AI
RWL: Steve's book 'The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation'
Connect with Steve on LinkedIn
Connect with Vinay on X and LinkedIn
What did you think about this episode? What would you like to hear more about? Or simply, write in and say hello! podcast@c2cod.com
Subscribe to us on your favorite platforms – Google Podcasts, Apple Podcasts, Spotify, Overcast, Tune In Alexa, Amazon Music, Pandora, TuneIn + Alexa, Stitcher, Jio Saavn and more.
This podcast is sponsored by C2C-OD, your Organizational Development consulting partner ‘Bringing People and Strategy Together'. Follow @c2cod on Twitter, LinkedIn, Instagram, Facebook

The Metacast
AAA Publisher Struggles, Roblox's Growth and Transmedia Success

The Metacast

Play Episode Listen Later Feb 10, 2026 69:30


After a volatile few months across games, tech, and public markets, it's time for a grounded check-in on where the industry actually stands. Host Devin Becker is joined by Aaron Bush (Managing Partner & Co-Founder, Naavik) to unpack the latest signals – from AAA publisher performance and what recent EA earnings suggest for big franchises like Battlefield, to Ubisoft's ongoing restructuring, studio closures, and the push to reframe its future through initiatives like Vantage Studios. Next, they dig into Roblox's continued growth and what its recent results imply, even as age-related scrutiny and safety conversations remain part of the narrative. From there, the discussion widens to the state of the console market: the early momentum around Switch 2 sales, the trajectory of Xbox hardware, and why Sony appears to be holding its ground. Devin and Aaron also look at how transmedia is shaping perception and demand, including Nintendo's recent moves and what releases like an upcoming Mario Galaxy movie – and the surprise success of Iron Lung this month – reveal about IP leverage, audience crossover, and timing. They close with addressing the market whiplash around the reveal of Google DeepMind's Genie 3, and a “buy, sell, or hold” round covering Microsoft, Krafton, AAA vs. AA, and PC gaming to highlight where near-term opportunities and risks may be emerging.
We'd like to thank Heroic Labs for making this episode possible! Thousands of studios have trusted Heroic Labs to help them focus on their games and not worry about gametech or scaling for success. To learn more and reach out, visit https://heroiclabs.com/?utm_source=Naavik&utm_medium=CPC&utm_campaign=Podcast
If you like the episode, please help others find us by leaving a 5-star rating or review! And if you have any comments, requests, or feedback shoot us a note at podcast@naavik.co.
Watch the episode: YouTube ChannelFor more episodes and details: Podcast WebsiteFree newsletter: Naavik DigestFollow us: Twitter | LinkedIn | WebsiteSound design by Gavin Mc Cabe.

Category Visionaries
How WindBorne Systems landed their first Air Force contract through Defense Innovation Unit | John Dean

Category Visionaries

Play Episode Listen Later Feb 10, 2026 18:06


WindBorne Systems is transforming global weather forecasting by deploying long-duration weather balloons that fly for weeks instead of hours. What began as a Stanford Student Space Initiative project has scaled to 100 balloons aloft simultaneously, targeting 500 by end of next year, with an end goal of 10,000 balloons monitoring Earth's atmosphere. In this episode of BUILDERS, I sat down with John Dean, Co-Founder and CEO of WindBorne Systems, to explore how the company secured its first government contract in under three years without lobbyists, achieved 4x annual manufacturing growth, and built Weather Mesh—an AI weather model that outperforms competitors from Google DeepMind.
Topics Discussed:
- The technical evolution from Stanford project to operational constellation of altitude-controlled balloons
- Strategic decision to pursue government revenue before building B2B forecasting products
- Navigating Defense Innovation Unit and Air Force Lifecycle Management Center procurement as a founder
- Timeline from founding to first grants (within six months) and first data delivery contract (two and a half years)
- Current roughly 50/50 revenue split between civilian agencies (NOAA, international weather services) and Department of Defense
- Building Weather Mesh after Huawei's Pangu Weather validated end-to-end AI forecasting viability
- Transitioning from founder-led sales by promoting a Palantir hire from proposal writer to public sector growth leader
- The 30-year vision of millions of fingernail-sized atmospheric sensors creating a planetary nervous system
GTM Lessons For B2B Founders:
Study the bureaucracy's incentive structures before pitching product value: John spent years mapping how government procurement actually works rather than leading with product capabilities. The critical insight: in DoD sales, the warfighter (end user) doesn't control purchasing decisions.
Success requires understanding each stakeholder's specific mandate and aligning your solution to their organizational incentives, not just operational needs. For civilian agencies like NOAA, the dynamics differ entirely. Founders entering govtech should invest 6-12 months learning procurement mechanics before expecting revenue.
Use government contracts as non-dilutive scaling capital for hardware businesses: WindBorne secured SBIR grants within six months, then landed their first Air Force data delivery contract through Defense Innovation Unit at the two-and-a-half-year mark. John explicitly treated early grants as equivalent to venture funding but without equity dilution. For companies building physical infrastructure at scale (satellites, hardware networks, manufacturing operations), government contracts provide the runway to reach technical milestones that unlock larger B2B opportunities. This sequencing—government funding first, then B2B products built on that foundation—proves more capital-efficient than attempting to raise massive venture rounds upfront for unproven hardware.
Integrate with legacy systems rather than attempting wholesale replacement: WindBorne doesn't aim to replace the 1,000 radiosondes launched daily worldwide—they're expanding coverage from the current 15% of Earth (where humans can launch traditional balloons) to 100%. The hardware is revolutionary (weeks of flight versus two hours), but the go-to-market integrates into existing weather agency workflows and feeds into established models like GFS and ECMWF. This approach accelerated adoption because agencies could add WindBorne data without overhauling their entire forecasting infrastructure. The displacement of radiosondes becomes economically inevitable long-term, but only after proving the system at scale.
Move fast once adjacent technology validates your thesis: WindBorne wasn't investing in AI-based weather forecasting until Huawei's Pangu Weather paper demonstrated that end-to-end neural weather models could compete with physics-based simulations. Once that validation appeared, John's team moved immediately—adopting the open architecture and expanding it into Weather Mesh before the approach became widely adopted. The lesson isn't to wait for competitors, but to monitor adjacent technological developments and move decisively when validation emerges. They built a top-performing model by being early to a proven approach, not first to an unproven one.
Hire for mid-level roles and promote based on demonstrated judgment: John hired Dana from Palantir as a proposal writer, not as a sales executive. He watched her demonstrate strong opinions that consistently proved correct, then promoted her to build and lead the entire public sector growth organization. This internal promotion model worked better than external executive hires because the person already understood WindBorne's technology, customers, and internal culture. For specialized domains like government sales, bringing in experienced operators at individual contributor levels and promoting them as they prove their judgment builds more effective organizations than hiring executives to parachute in.
// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role.
Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

Deconstructor of Fun
TWiG #369: Genie 3 Explodes, Game Stores Die, & High Guard Stumbles

Deconstructor of Fun

Play Episode Listen Later Feb 5, 2026 64:40


From industry hype to hard reality checks, this week we discuss Shooter Monthly and the latest on High Guard, analyze Aries Interactive's funding, and explore the monumental impact of Google DeepMind's Genie 3 on game development, including industry resistance to new AI tools. We also tackle the decline of physical game stores, address a potentially distorted view of the UK market, and provide a deep dive into Arknights: Endfield's market performance and the future of 3D RPGs, before pinpointing High Guard's marketing missteps and wrapping up with concluding thoughts.
02:13 | Shills
03:38 | Shooter Monthly and High Guard Discussion
06:33 | Aries Interactive Funding and Analysis
16:27 | Google DeepMind's Genie 3 Impact
30:58 | AI Tools and Industry Resistance
32:45 | The Decline of Physical Game Stores
34:44 | A Distorted View of the UK
36:40 | Arknights: Endfield: A Deep Dive
40:10 | Endfield's Market Performance
40:59 | The Future of 3D RPGs
51:11 | High Guard's Marketing Missteps
01:03:41 | Concluding Thoughts and Farewell

TD Ameritrade Network
De-Risking Effect: Watching NVDA, GOOGL, QCOM & Bitcoin

TD Ameritrade Network

Play Episode Listen Later Feb 5, 2026 8:55


Kevin Green kicks off Thursday's market coverage with his eyes on the weakness in tech and comm. services on the heels of Alphabet (GOOGL) earnings. He says the ripple effects could impact other names, adding $170 "has to hold" for Nvidia (NVDA). For GOOGL, he says the re-rated "aggressively higher" capex spend was up sharply from market expectations, as Google DeepMind and its AI capabilities continue to spend heavily. KG also examines Qualcomm's (QCOM) downward post-earnings move and Bitcoin's (/BTC) continued fall so far this month. For the S&P 500 (SPX), KG says "keep your head on a swivel" while he projects a wide range today with 6750 to the downside and 6930 to the upside.
======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV app - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – / schwabnetwork
Follow us on Facebook – / schwabnetwork
Follow us on LinkedIn - / schwab-network
About Schwab Network - https://schwabnetwork.com/about

Artificiality
Ellie Pavlick: The AI Paradigm Shift

Artificiality

Play Episode Listen Later Feb 5, 2026 55:49


In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, a Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.
Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.
Key themes we explore:
- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges
- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition
- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI
- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether
- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities
- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential
Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.
Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."

Silicon Carne, un peu de picante dans la Tech
Davos 2026 : le jour où la Silicon Valley a avoué la vérité sur l'IA !

Silicon Carne, un peu de picante dans la Tech

Play Episode Listen Later Feb 4, 2026 74:30


Today on Silicon Carne, we look back at the 2026 World Economic Forum in Davos, where the statements from Tech bosses were explosive. So what did they tell us about the future of AI, jobs, and our civilization?

People Solve Problems
Steve Brown of Google DeepMind fame on Leading AI Transformation

People Solve Problems

Play Episode Listen Later Feb 4, 2026 27:13


Steve Brown has spent years helping organizations see around corners. As a former executive at both Intel Labs and Google DeepMind, where he served as their in-house futurist, Steve brings a unique perspective on what happens when rapid technological change collides with practical business reality. In this conversation, he challenges leaders to move beyond fear and cost-cutting mentality to embrace AI as a tool for genuine value creation. Steve explains that being a futurist isn't about making predictions—that's for fortune tellers. Instead, it's a discipline of examining trends, understanding how they intersect over time, and mapping possible futures. But the landscape has grown increasingly complex. The pace of AI development has accelerated so dramatically that projecting even six months ahead has become challenging. What makes AI particularly difficult to forecast isn't just the technology itself, but the ripple effects of having powerful intelligence available on demand at low cost. As Steve puts it, this changes everything about everything. When it comes to implementation, Steve grounds his approach in a framework he calls "possibility and purpose." He sees AI creating an enormous landscape of what's possible, but warns that the real leadership challenge is figuring out what not to do. By finding the intersection between corporate purpose and this expanded possibility space, organizations can focus their efforts where they'll create the most value. Steve offers a fresh perspective on AI's relationship with human qualities, such as empathy. While acknowledging that AI simulates rather than truly experiences emotions, he points to promising applications like AI therapists that can reach people who would never seek human help. The key is understanding when simulation serves a genuine need versus when it creates friction in developing essential human skills—like learning to navigate relationships and failures. 
The heart of Steve's message centers on reimagining AI not as a replacement for humans, but as a collaborative teammate. He describes three types of AI agents organizations should consider: offload agents that handle boring repetitive work, elevate agents that amplify human capabilities, and extend agents that enable people to do things they couldn't do before. This framework transforms workforce planning from a zero-sum game into an expansion strategy. Steve points to Jensen Huang's vision at NVIDIA—growing from 30,000 employees to 50,000, supported by 100 million AI assistants—as an example of thinking about amplification rather than reduction. Steve argues that AI project failures typically stem from three core issues: immature technology, poor change management, and messy data. Organizations succeed when they start small with bounded projects, balance short-term wins with medium and long-term initiatives, and treat AI implementation as fundamentally a change management challenge rather than just a technology deployment. He emphasizes that everyone owns the AI transition—from line of business to HR to IT—though having a Chief AI Officer can help drive the organizational transformation required. Rather than obsessing over traditional ROI calculations, Steve encourages leaders to focus on the human challenges that AI can solve. When the average knowledge worker spends 32 days per year just searching for information, cutting that time in half represents massive value that goes beyond simple efficiency metrics. 
Learn more about Steve's work and access his resources:
AI Resources: https://beacons.ai/aifuturist
AI Course: https://www.stevebrown.ai/ai-course
AI Workshops: https://www.stevebrown.ai/workshop
Keynotes: https://www.stevebrown.ai/keynotes
YouTube: www.youtube.com/@futureofai
Amazon book “The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation”: https://a.co/d/1YoFV5C
Connect with him on LinkedIn at https://www.linkedin.com/in/futuresteve/

Lex Fridman Podcast
#490 – State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI

Lex Fridman Podcast

Play Episode Listen Later Feb 1, 2026


Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch). Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/ai-sota-2026-transcript
CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Box: Intelligent content management platform. Go to https://box.com/ai
Quo: Phone system (calls, texts, contacts) for businesses. Go to https://quo.com/lex
UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex
Fin: AI agent for customer service. Go to https://fin.ai/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
Perplexity: AI-powered answer engine. Go to https://perplexity.ai/
OUTLINE:
(00:00) – Introduction
(01:39) – Sponsors, Comments, and Reflections
(16:29) – China vs US: Who wins the AI race?
(25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
(36:11) – Best AI for coding
(43:02) – Open Source vs Closed Source LLMs
(54:41) – Transformers: Evolution of LLMs since 2019
(1:02:38) – AI Scaling Laws: Are they dead or still holding?
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
(1:51:51) – Post-training explained: Exciting new research directions in LLMs
(2:12:43) – Advice for beginners on how to get into AI development & research
(2:35:36) – Work culture in AI (72+ hour weeks)
(2:39:22) – Silicon Valley bubble
(2:43:19) – Text diffusion models and other new research directions
(2:49:01) – Tool use
(2:53:17) – Continual learning
(2:58:39) – Long context
(3:04:54) – Robotics
(3:14:04) – Timeline to AGI
(3:21:20) – Will AI replace programmers?
(3:39:51) – Is the dream of AGI dying?
(3:46:40) – How AI will make money?
(3:51:02) – Big acquisitions in 2026
(3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
(4:08:08) – Manhattan Project for AI
(4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
(4:22:48) – Future of human civilization

AI For Humans
The AI Holodeck Just Got Real: Google's Project Genie

AI For Humans

Play Episode Listen Later Jan 30, 2026 53:42


Google DeepMind dropped Project Genie and you can now walk around AI-generated 3D worlds. DeepMind CEO Demis Hassabis says this is the path to the holodeck. He's not wrong. Meanwhile Clawd, aka Clawdbot and now Moltbot, is giving people AI superpowers… spawning agents, teaching itself skills, connecting to everything in your life. It's also a massive security risk and people are spending thousands on API calls. You *probably* shouldn't use it. Plus… Grok video is suddenly really good, KREA real-time generation, a robot that hip-checks dishwasher drawers, Anthropic CEO Dario Amodei's sobering new essay, and the dead internet theory is no longer a theory. THE ROBOTS ARE DOING CHORES NOW. WE'RE SO BACK. #ai #ainews #projectgenie
Come to our Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/
// Show Links //
Google Project Genie: Playable Worlds (formerly Genie 3): https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
Josh Woodward (VP of AI Studio and more @ Google) low poly 3D cowboy: https://x.com/joshwoodward/status/2016921839038255210?s=20
Theoretically Media: Realistic Hollywood Blvd & New York: https://x.com/TheoMediaAI/status/2016919987428991107?s=20
Ethan Mollick: Otter Pilot in an Airport: https://x.com/emollick/status/2016919989865840906?s=20
Clawdbot (Now Moltbot) Insanity: https://clawd.bot/
Creator Peter Steinberger on TBPN: https://x.com/tbpn/status/2016306566077755714?s=20
Good long post on the pros / cons & safety concerns: https://x.com/Andrey__HQ/status/2016228427901370760?s=20
White Hat Hacker Shows Exactly How Bad It Can Get: https://x.com/theonejvo/status/2016510190464675980?s=20
MoltBook - Social Network For Bots, By Bots:
https://x.com/MattPRD/status/2016560277333168540?s=20
Kimi K2.5 launches: https://www.kimi.com/
Dario Amodei's Essay Shows Potential AI Risks: https://www.darioamodei.com/essay/the-adolescence-of-technology
Lucy DecartAI Demo: https://lucy.decart.ai/
KREA Real-time: https://www.krea.ai/realtime
New Grok Imagine Model: https://x.com/xai/status/2016745652739363129
Figure Introduces Helix 2 (the AI model running the robot): https://x.com/Figure_robot/status/2016207013236375661?s=20
Pokemon-style Interactive Game For Podcast Content Quiz: https://x.com/lennysan/status/2016584174421897590?s=20
Small Builder Alert: Racing App For Data: https://x.com/ShinjaeJung/status/2015980232667435048?s=20
Blackfiles: AI Generated Visual YouTube Channel (Long Form): https://youtu.be/2n01_rt2vKg?si=QvP3vYgGV6_Y7pua
Mureka V8 AI Music Model: https://x.com/EHuanglu/status/2016668882644156863?s=20 and https://x.com/Mureka_AI/status/2016544920283365831
Isometric NYC: https://x.com/_coenen/status/2014359718697799989 and https://cannoneyed.com/isometric-nyc/

DisrupTV
AI at Machine Speed: What Leaders Are Getting Wrong in 2026 | DisrupTV Ep. 426

DisrupTV

Play Episode Listen Later Jan 30, 2026 54:59


This week on DisrupTV, we go beyond the AI hype and into the decisions shaping 2026. Peter Danenberg, Senior Software Engineer at Google DeepMind, joins us to unpack the rapid evolution of multimodal models like Gemini, the race toward AGI, and what he's seeing from inside one of the world's most influential AI labs. We're also joined by David Bray, PhD, Distinguished Chair of the Accelerator at The Stimson Center & Principal/CEO, LDA Ventures Inc., to examine the dangerous assumptions executives are making post-Davos — from AI-driven workforce disruption and machine-speed cyber threats to hardware security risks and why geopolitics can no longer be ignored by business leaders. From meetups and models to geopolitics and governance, this episode is a must-watch for leaders navigating AI at scale.

Beyond The Valley
Can We Control AI? DeepMind's Plan for Responsible AI

Beyond The Valley

Play Episode Listen Later Jan 29, 2026 44:25


Google DeepMind's Dawn Bloxwich and Tom Lue join "The Tech Download" to explore one of the biggest questions in technology today: Can we control AI? They break down how DeepMind is building safeguards, stress‑testing its models and working with global regulators to ensure advanced AI develops responsibly.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Business Pants
The feckless Minnesota CEO response: George Floyd vs. Alex Pretti

Business Pants

Play Episode Listen Later Jan 28, 2026 60:19


At the beginning of December 2025: ICE announced an enforcement surge in the Twin Cities.
January 6, 2026: DHS announced what it called the largest immigration enforcement operation ever carried out, sending 2,000 agents to the Minneapolis–Saint Paul metropolitan area.
January 7, 2026: ICE agent Jonathan Ross fatally shoots Renée Nicole Good
January 8–14, 2026: Protests, vigils, and marches continue in Minneapolis against ICE and Operation Metro Surge
January 13, 2026: ‘Madness': two US citizens violently detained by ICE in Minnesota, officials say. Two Target employees forced to the ground, then into an SUV, then dumped in different parking lots
January 14, 2026: A different ICE agent shoots and injures a man in north Minneapolis; the man survives after being shot in the leg. This second shooting further intensifies public anger and calls for an end to the federal surge
January 17, 2026: National Anger Spills Into Target Stores, Again
January 22, 2026: Target Store Staff Are Skipping Work Over ICE's Crackdown in Minnesota
January 23, 2026: A statewide Day of Truth & Freedom / Minnesota general strike is held, described as the first U.S. general strike in about 80 years, explicitly targeting ICE operations and Operation Metro Surge. On that day, many workers, businesses, schools, and institutions in Minneapolis and across Minnesota participate in work stoppages, marches, and large rallies against federal immigration enforcement.
January 24, 2026: Federal Border Patrol agents assigned to the metro surge shoot and kill Alex Jeffrey Pretti
January 25, 2026: The Minnesota Chamber of Commerce released this letter on behalf of more than 60 CEOs of Minnesota-based companies today.
Eight people have died in dealings with ICE so far in 2026: Keith Porter, Parady La, Heber Sanchaz Domínguez, Victor Manuel Diaz, Luis Beltran Yanez-Cruz, Luis Gustavo Nunez Caceres, and Geraldo Lunas Campos.
The high-profile fatal shootings follow the deaths of at least 32 people in ICE custody in 2025 – the highest number since 2004.
Minnesota CEOs Seek De-Escalation After Border Police Shooting
“The business community in Minnesota prides itself in providing leadership and solving problems to ensure a strong and vibrant state. The recent challenges facing our state have created widespread disruption and tragic loss of life. For the past several weeks, representatives of Minnesota's business community have been working every day behind the scenes with federal, state and local officials to advance real solutions. These efforts have included close communication with the Governor, the White House, the Vice President and local mayors. There are ways for us to come together to foster progress. With yesterday's tragic news, we are calling for an immediate deescalation of tensions and for state, local and federal officials to work together to find real solutions. We have been working for generations to build a strong and vibrant state here in Minnesota and will do so in the months and years ahead with equal and even greater commitment. In this difficult moment for our community, we call for peace and focused cooperation among local, state and federal leaders to achieve a swift and durable solution that enables families, businesses, our employees, and communities across Minnesota to resume our work to build a bright and prosperous future.”
3M – William Brown, Chairman and CEO
Ameriprise Financial – James Cracchiolo, Chairman and CEO
APi Group – Russell Becker, CEO
Best Buy – Corie Barry, CEO
C.H. Robinson – Dave Bozeman, President and CEO
Deluxe Corporation – Barry McCarthy, President and CEO
Donaldson Company, Inc. – Tod Carpenter, Chairman and CEO
Ecolab – Christophe Beck, Chairman and CEO
General Mills – Jeff Harmening, Chairman and CEO
H.B. Fuller – On behalf of our entire organization [CEO Celeste Mastin]
Hormel – Jeff Ettinger, Interim CEO
Medtronic – Geoff Martha, CEO and Chairman
nVent – Beth Wozniak, Chair and CEO
Patterson Companies – Robert Rajalingam, CEO
Pentair – John L. Stauch, President and CEO
Piper Sandler – Chad Abraham, Chairman and CEO
Sleep Number – Linda Findley, CEO (4/2025)
Solventum – Bryan Hanson, CEO
SPS Commerce – Chad Collins, CEO
SunOpta – Brian Kocher, CEO
Target – Michael Fiddelke, Incoming CEO
Tennant Company – Dave Huml, CEO
The Toro Company – Rick Olson, Chairman and CEO
U.S. Bancorp – Gunjan Kedia, CEO
Winnebago Industries – Michael Happe, CEO
Xcel Energy – Bob Frenzel, Chairman and CEO
Keith Rabois, Managing Director of Khosla Ventures: “no law enforcement has shot an innocent person. illegals are committing violent crimes everyday.”
Khosla Ventures: “We prefer brutal honesty to hypocritical politeness.” “Technology and innovation have reshaped our world and disrupted the way we all live and work. The future may not be knowable, but it is inventable—and it belongs to those who dare to imagine what's possible.”
Managing Directors: 5 dudes (3 Stanford; 3 Harvard)
Founder Vinod Khosla: “I agree with @EthanChoi7. Macho ICE vigilantes running amuck empowered by a conscious-less administration. The video was sickening to watch and the storytelling without facts or with invented fictitious facts by authorities almost unimaginable in a civilized society. ICE personnel must have ice water running thru their veins to treat other human beings this way. There is politics but humanity should transcend that”
Target's incoming CEO Michael Fiddelke, in a video message sent to employees (January 26, 2026): “Right now, as someone who is raising a family here in the Twin Cities and as a leader of this hometown company I want to acknowledge where we are. The violence and loss of life in our community is incredibly painful. I know it's weighing heavily on many of you across the country, as it is with me.
What's happening affects us not just as a company but as people, as neighbors, friends and family members.”A company spokesman declined to comment. Still nothing official on website.Lloyd Vogel, CEO Garage Grown Gear: said he felt compelled to condemn the shootings in a LinkedIn post because he lives and works in the Twin Cities. "My primary rationale was to show solidarity with my community," he told Business Insider. "It's also just bad for business when people are afraid to leave their homes.""There's so much fear in Minnesota right now," he said. "It would just be cowardice to not have a perspective on this."JPMorgan Chase CEO and Chair Jamie Dimon 1/22/26 Davos): ″I don't like what I'm seeing, five grown men beating up a little old lady. So I think we should calm down a little bit on the internal anger about immigration… We need these people. They work in our hospitals and hotels and restaurants and agriculture, and they're good people.… They should be treated that way.”On Saturday evening (1/24/2026), top technology executives gathered in Washington to attend a screening of “Melania,” a documentary produced by Amazon about the first lady, Melania Trump. Black-tie event: guests were handed monogrammed buckets of popcorn, framed screening tickets for their trophy shelves, and a limited-edition copy of Trump's 2024 book of the same title as her documentary, “Melania.“Among them was Andy Jassy, the chief executive of Amazon; Tim Cook, the chief executive of Apple; and Lisa Su, the chief executive of chip maker AMD.Also: Eric Yuan – CEO, Zoom; Lynn Martin – President, New York Stock Exchange; General Electric CEO Larry CulpApple CEO Tim Cook says it's 'time for de-escalation' in MinneapolisCook came under fire for appearing at The White House just hours after federal immigration authorities killed Alex Pretti, a veterans' nurse, in Minnesota“This is a time for de-escalation,” Cook wrote to Apple staff. 
“I believe America is strongest when we live up to our highest ideals, when we treat everyone with dignity and respect no matter who they are or where they're from, and when we embrace our shared humanity.”Cook said he “had a good conversation with the president this week where I shared my views, and I appreciate his openness to engaging on issues that matter to us all." Apple's Cook says he's ‘heartbroken' by Minneapolis events and has spoken with TrumpOpen AI CEO Sam Altman (1/27/26): I love the US and its values of democracy and freedom and will be supportive of the country however I can; OpenAI will too. But part of loving the country is the American duty to push back against overreach. What's happening with ICE is going too far. There is a big difference between deporting violent criminals and what's happening now, and we need to get the distinction right. President Trump is a very strong leader, and I hope he will rise to this moment and unite the country. I am encouraged by the last few hours of response and hope to see trust rebuilt with transparent investigations. As a company, we aim to stick to our convictions and not get blown around by changing fashions too much. We didn't become super woke when that was popular, we didn't start talking about masculine corporate energy when that was popular, and we are not going to make a lot of performative statements now about safety or politics or anything else. But we are going to continue to try to figure out how to actually do the right thing as best as we can, engage with leaders and push for our values, and speak up clearly about it as needed.James Dyett, Global Business at OpenAI: “There is far more outrage from tech leaders over a wealth tax than masked ICE agents terrorizing communities and executing civilians in the streets. Tells you what you need to know about the values of our industry.”Angel Investor Jason Calacanis: Once again, I will remind everyone that our leaders are failing us. 
True leadership would be to calm this situation down by telling these non-peaceful protestors to stay home while recalling these inadequately-trained agents.”Jeff Dean, Chief Scientist, Google DeepMind & Google Research. Gemini Lead: “This is absolutely shameful. Agents of a federal agency unnecessarily escalating, and then executing a defenseless citizen whose offense appears to be using his cell phone camera. Every person regardless of political affiliation should be denouncing this.”Jeffrey Sonnenfeld, senior associate dean for leadership studies at the Yale School of Management: "CEOs are feeling the community pressure." He said that reactions that convey sorrow and don't mention Trump or ICE are likely to be perceived as an unwelcome challenge to the White House's immigration agenda. "That is not what the Trump administration wanted," he said.Business Roundtable CEO Joshua Bolten asked to comment on the chaos in Minneapolis: replied with a statement endorsing the Minnesota Chamber's call for "cooperation between state, local, and federal authorities to immediately de-escalate the situation in Minneapolis."Robert Pasin, CEO of toy company Radio Flyer: recently shared an email on LinkedIn that he sent to his employees that was critical of the shootings in Minneapolis: "I am deeply concerned about the current state of our democracy, and the continued actions we are seeing from President Trump and his administration that are intended to undermine democratic institutions, the rule of law, and the norms that hold our country together."Dario Amodei, CEO Anthropic: called the events in Minnesota a “horror” on Monday. An Anthropic spokeswoman said the company did not have contracts with ICE.ICEout.tech statement from January 24, 2026: "We condemn the Border Patrol's killing of Alex Pretti and the violent surge of federal agents across our cities. 
The wanton brutality we've seen from ICE and CBP has removed any credibility that these actions are about immigration enforcement. Their goal is terror, cruelty, and suppression of dissent. This must end. Tech professionals are speaking up against this brutality, and we call on all our colleagues who share our values to use their voice. We know our industry leaders have leverage: in October, they persuaded Trump to call off a planned ICE surge in San Francisco, and big tech CEOs are in the White House tonight. Now they need to go further, and join us in demanding ICE out of all of our cities." 811: 508 names; 19 one name with title, 284 role onlyReid Hoffman says business leaders are wrong to stay silent about the Trump administrationThe LinkedIn cofounder and tech investor said in an episode of the "Rapid Response" podcast published Tuesday that he rejects the idea that executives can simply wait out political turbulence: "The theory that if you just keep your mouth shut, the storm will blow over and it won't be a problem — you should be disabused of that theory now," Hoffman said.Palantir Defends Work With ICE to Staff Following Killing of Alex Pretti: Leadership defended its work as in part improving “ICE's operational effectiveness.”

AI Inside
The Kleenex of AI

AI Inside

Play Episode Listen Later Jan 28, 2026 70:03


This episode is sponsored by Your360 AI. Get 10% off through January 2026 at Your360.ai with code: INSIDE. On this week's AI Inside, Jeff Jarvis and Jason Howell test Google's new Gemini-powered Auto-Browse Chrome agents, wonder whether Yahoo Scout really matters, question Apple's Gemini-fueled Siri revamp and rumored AI pin, and explore Mozilla's "rebel alliance" bet on open-source AI. Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
00:00 - Podcast begins
0:04:30 - Chrome takes on AI browsers with tighter Gemini integration, agentic features for autonomous tasks
0:26:42 - Yahoo Scout looks like a more web-friendly take on AI search
0:38:31 - Apple to Revamp Siri as a Built-In iPhone, Mac Chatbot to Fend Off OpenAI
0:42:59 - Not to be outdone by OpenAI, Apple is reportedly developing an AI wearable
0:47:10 - Mozilla is building an AI 'rebel alliance' to take on industry heavyweights OpenAI, Anthropic
0:56:14 - Google DeepMind launches AI tool to help identify genetic drivers of disease
0:59:05 - The EU tells Google to give external AI assistants the same access to Android as Gemini has
1:01:07 - Shopify Merchants to Pay 4% Fee on ChatGPT Checkout Sales
1:02:23 - Microsoft announces powerful new chip for AI inference
1:03:50 - EU launches formal investigation of xAI over Grok's sexualized deepfakes
Learn more about your ad choices. Visit megaphone.fm/adchoices

Squawk Pod
Davos 2026: Google DeepMind CEO Demis Hassabis 1/24/26

Squawk Pod

Play Episode Listen Later Jan 24, 2026 15:08


AI is front and center in Davos this year, as world leaders and tech executives debate how quickly the technology is reshaping the economy and workforce. Demis Hassabis, co-founder and CEO of Google DeepMind, sits down with CNBC's Andrew Ross Sorkin at the World Economic Forum. The two discuss Gemini's position in the AI race, the evolution of artificial general intelligence (AGI), and what it all means for jobs.
In this episode:
Demis Hassabis, @demihassabis
Andrew Ross Sorkin, @andrewrsorkin
Cameron Costa, @CameronCostaNY
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

AI For Humans
Claude Code Is Taking Over (And We Don't Hate It)

AI For Humans

Play Episode Listen Later Jan 23, 2026 59:16


Claude Code is taking over and even the Wall Street Journal is Claude Pilled. Anthropic CEO Dario Amodei just said we're 6 months from AI doing most software engineering. No big deal. Claude Code skills are exploding: Remotion for AI video editing, Pencil for infinite design canvases, Compound Engineering for spinning up agent fleets while you sleep. Your $200/month Max subscription doesn't stand a chance. Plus Apple's working on an AI pin, Runway dropped Gen 4.5, LTX Studio has a wild audio-to-video model, and there's an AI monk with 2.5 million followers selling healing journeys. WE'RE CLAUDE PILLED NOW. RESISTANCE IS FUTILE.
Come to our Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/
Show Links
Anthropic CEO Dario Amodei at Davos https://youtu.be/9Zz2KrBDXUo?si=JliJ8xSnndouVWUM
Even The Wall Street Journal Is Claude Code Pilled https://x.com/WSJ/status/2014186506320007182?s=20
Remotion: Video Editing In Claude https://www.remotion.dev/
Coding example: https://x.com/Remotion/status/2013626968386765291?s=20
Very good Remotion Video Example: https://x.com/justinmfarrugia/status/2014162910168162478?s=20
Infinite Design Canvas https://x.com/tomkrcha/status/2014028990810300498?s=20
Compound Engineering https://x.com/kieranklaassen/status/2013776190042185971?s=20
Matt Pocock's Claude Tutorials https://x.com/mattpocockuk/status/2014336302120923513?s=20
Meanwhile, Claude has a constitution now… https://www.anthropic.com/constitution
Apple Wearable AI Pin https://www.theinformation.com/articles/apple-developing-ai-wearable-pin?rc=c3oojq&shared=2c49629944958284
New Apple AI Chatbot This Fall? https://x.com/markgurman/status/2014063049821299069?s=20
Google Buys Hume As Voice Tech Heats Up? https://www.wired.com/story/google-hires-hume-ai-ceo-licensing-deal-gemini/ (paywall) and https://techcrunch.com/2026/01/22/google-reportedly-snags-up-team-behind-ai-voice-startup-hume-ai/
Google DeepMind's D4RT Model https://x.com/GoogleDeepMind/status/2014352808426807527?s=20 and https://deepmind.google/blog/d4rt-teaching-ai-to-see-the-world-in-four-dimensions/
Runway Gen 4.5 Image to Video https://x.com/runwayml/status/2014090404769976744?s=20
Audio-to-Video From LTX https://x.com/LTXStudio/status/2013650214171877852?s=20
Good for music videos: https://x.com/fofrAI/status/2014110494315913706?s=20
Borat in ALL THE THINGS https://x.com/maxescu/status/2013650830650741130?s=20
Goodbye Kaplan, Gemini Launches SAT Practice Tests https://x.com/Google/status/2014020819173687626?s=20
The AI Monk: Do We Want This? https://x.com/pubity/status/2009762025707069545?s=20
Every Street Fighter Pose Brought To Life https://www.reddit.com/r/aivideo/comments/1qj3bys/every_street_fighter_ii_losing_pose_brought_to/
The ELEVEN ALBUM https://x.com/elevenlabsio/status/2014021275107172618?s=20
Epic Sports Anime https://www.tiktok.com/t/ZThSmmnVA/
Vibe Coded Driving Game https://www.youtube.com/watch?v=mY-4Ls_2TS0&t=3s
Our buddy Theoretically Media launched a newsletter! https://theoreticallymedia.beehiiv.com/p/openai-s-suno-killer-the-cinematic-prompt-you-ve-been-waiting-for
SLIPPERY ROBIT https://x.com/rohanpaul_ai/status/2013856833426071787?s=20

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Captaining IMO Gold, Deep Think, On-Policy RL, Feeling the AGI in Singapore — Yi Tay

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 23, 2026 92:05


From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning—watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more!
We discuss:
* Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
* Why they threw away AlphaProof: “If one model can't do it, can we get to AGI?” The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory), on-policy is the model generating its own outputs, getting rewarded, and training on its own experience—“humans learn by making mistakes, not by copying”
* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA + FAIR's code world models (modeling internal execution state), (3) the amorphous “resolution of possible worlds” paradigm (curve-fitting to find the world model that best explains the data)
* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix—“the model is better than me at this”
* The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? “Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up”
* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
* Why RecSys and IR feel like a different universe: “modeling dynamics are strange, like gravity is different—you hit the shuttlecock and hear glass shatter, cause and effect are too far apart”
* The closed lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
* Why ideas still matter: “the last five years weren't just blind scaling—transformers, pre-training, RL, self-consistency, all had to play well together to get us here”
* Gemini Singapore: hiring for RL and reasoning researchers, looking for a track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier
—Yi Tay
Google DeepMind: https://deepmind.google
X: https://x.com/YiTayML
Full Video Episode
Timestamps
00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey
Get full access to Latent.Space at www.latent.space/subscribe
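The "self-consistency" idea the episode describes (sample the model several times, then majority-vote the final answers) can be sketched in a few lines. This is a toy illustration of the voting step only, not DeepMind's implementation; `sample_model` is a hypothetical stand-in for a stochastic LLM call.

```python
from collections import Counter

def self_consistency(sample_model, prompt, n_samples=5):
    """Sample the model several times and majority-vote the final answers.

    sample_model: any callable prompt -> answer string. Here it is a
    hypothetical stand-in for a stochastic LLM call, not a real API.
    """
    answers = [sample_model(prompt) for _ in range(n_samples)]
    # most_common(1) returns the (answer, count) pair with the highest count
    return Counter(answers).most_common(1)[0][0]

# Toy stochastic "model": right 3 times out of 5 on this prompt.
_canned = iter(["42", "41", "42", "42", "17"])
answer = self_consistency(lambda prompt: next(_canned), "What is 6 * 7?")
print(answer)  # majority vote picks "42"
```

The same aggregation slot generalizes to the other variants mentioned above: swap the majority vote for an LM judge or an internal verifier and you still get multiple samples reduced to one answer instead of trusting a single-shot inference.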


Rental Property Owner & Real Estate Investor Podcast
Conference Announcement: Midwest Real Estate Investor Conference April 27–28, 2026

Rental Property Owner & Real Estate Investor Podcast

Play Episode Listen Later Jan 22, 2026 9:51


The Midwest Real Estate Investor Conference (MREIC) is back April 27–28, 2026 at DeVos Place in Grand Rapids, Michigan, and this year's theme is Thrive. This quick special announcement features Erika Farley, Executive Director of the Rental Property Owners Association of Michigan (RPOAM), breaking down what's new, who it's for, and why you should lock in your ticket now. (Midwest Real Estate Investor Conference)
You'll hear how the 2026 agenda is built around systems, real-world strategy, and operating resilience, including major focus on AI integration and a grounded economic outlook for 2026 that investors can actually use. (Midwest Real Estate Investor Conference)
Conference Theme
Thrive, with a clear emphasis on "Where Strategy Meets Systems" and building portfolios that can perform in changing market cycles. (Midwest Real Estate Investor Conference)
What we cover in this conversation
Why "Thrive" matters right now and what attendees should expect to walk away with
AI keynote with Steve Brown (former executive at Google DeepMind and Intel) and what practical AI strategy looks like for investors and operators
Economic forecast keynote with Dr. Paul Isely (GVSU) and why it's consistently one of the most packed sessions
Featured speakers and what they're known for (private equity, tax and legal, multifamily, commercial, missing middle housing)
Networking, kickoff reception, vendors, and sponsor support that make the event worth showing up for
Featured speakers mentioned
Steve Brown (AI keynote)
Dr. Paul Isely (2026 economic outlook keynote)
John Burley, Mark Kohler, Anthony Chara, Paul Moore, Nathan Biller
Key agenda focus areas (2026 "Thrive" framework)
Smart Systems and AI Integration
Market Outlook and Economic Data
Operational Risk and Compliance
Capital and Acquisitions Strategy
Sustainable Growth and Scaling
Advanced Portfolio Management
Networking and Add-on Experiences
MREIC Kickoff Reception: Sunday, April 26, 2026 (5:00–7:00 PM). Included with registration, RSVP required, space limited.
Private Keynote Strategy Forum: limited capacity add-on for deeper discussion (conference registration required).
Hotels and Lodging
Discounted conference hotel options include the Amway Grand Plaza (connected to DeVos Place) and the JW Marriott Grand Rapids (short walk). Book through the official hotel block.
Pricing Note
Super Early Bird pricing runs through January 31 and pricing increases February 1. (Midwest Real Estate Investor Conference)
Quick timestamps (approx.)
00:00 Conference dates, location, and why this matters
00:40 Theme: Thrive in Every Market
01:00 AI keynote and why it's front and center
02:00 Economic outlook with Dr. Paul Isely
02:40 Speaker lineup and topic variety
05:00 Networking and kickoff reception
06:00 VIP and deeper access opportunities
08:00 Sponsors, vendors, and early pricing
Register
Lock in your seat at midwestreiconference.com. (Midwest Real Estate Investor Conference)

Waking Up With AI
Confessions of a Large Language Model

Waking Up With AI

Play Episode Listen Later Jan 22, 2026 22:41


In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers' proposed "confessions" framework designed to monitor for and detect dishonest outputs. They break down the researchers' proof-of-concept results and the framework's resilience to reward hacking, along with its limits in connection with hallucinations. Then they turn to Google DeepMind's "Distributional AGI Safety," exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors' proposed four-layer safety stack.
Learn More About Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

Big Technology Podcast
Google DeepMind CEO Demis Hassabis: AI's Next Breakthroughs, AGI Timeline, Google's AI Glasses Bet

Big Technology Podcast

Play Episode Listen Later Jan 21, 2026 34:07


Demis Hassabis is the CEO of Google DeepMind. Hassabis joins Big Technology Podcast to discuss where AI progress really stands today, where the next breakthroughs might come from, and whether we've hit AGI already. Tune in for a deep discussion covering the latest in AI research, from continual learning to world models. We also dig into product, discussing Google's big bet on AI glasses, its advertising plans, and AI coding. We also cover what AI means for knowledge work and scientific discovery. Hit play for a wide-ranging, high-signal conversation about where AI is headed next from one of the leaders driving it forward.  --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices

TEDTalks Health
How AI is saving billions of years of human research time | Max Jaderberg

TEDTalks Health

Play Episode Listen Later Jan 20, 2026 19:29


Can AI compress the years-long research time of a PhD into seconds? Research scientist Max Jaderberg explores how "AI analogs" simulate real-world lab work with staggering speed and scale, unlocking new insights on protein folding and drug discovery. Drawing on his experience working on Isomorphic Labs' and Google DeepMind's AlphaFold 3 — an AI model for predicting the structure of molecules — Jaderberg explains how this new technology frees up researchers' time and resources to better understand the real, messy world and tackle the next frontiers of science, medicine and more. Hosted on Acast. See acast.com/privacy for more information.

矽谷輕鬆談 Just Kidding Tech
S2E41 From Chess Prodigy to DeepMind: Demis's 20-Year Quest for AGI

矽谷輕鬆談 Just Kidding Tech

Play Episode Listen Later Jan 18, 2026 26:59


Become a member of this channel and get perks: https://www.youtube.com/channel/UCJIPFjZSCWR15_jxBaK2fQQ/join
A while back, while traveling, I watched the newly released documentary The Thinking Game, and "astonishing" is the only way to describe it. The film chronicles DeepMind founder Demis Hassabis's pursuit of artificial general intelligence (AGI), and as soon as I finished it I decided I had to make an episode about this man and the world-changing company he built. It's hard to imagine that AlphaGo, AlphaFold, and even Gemini, which we now take for granted, all trace back to the epiphany of a 13-year-old chess prodigy. After a grueling 10-hour match, the young Demis realized that devoting the human brain solely to zero-sum games was a waste. So he moved from game development into neuroscience, eventually founded DeepMind, and pitched Peter Thiel and Elon Musk a crazy plan: "We're building an Apollo program for AI. Step one, solve intelligence; step two, use it to solve everything else." This episode is more than a companion to the documentary: I've pieced together Demis's 20-year long march, including the inside story of the Google–Facebook talent war, how AlphaFold cracked a problem that had stumped science for 50 years, and how Google DeepMind is now fighting back from adversity. This isn't just a story about building software or games; it's a journey of humanity trying to unravel the mystery of intelligence and decode life itself. I hope this episode helps you make sense of one of the grandest scientific experiments in human history. Highlights: ♟️ The chess prodigy's epiphany: why did a 10-hour draw convince him to give up chess for AI?

Unchained
DEX in the City: Why Prediction Market 'Insider Trading' Isn't Illegal — Yet

Unchained

Play Episode Listen Later Jan 10, 2026 44:29


Thank you to our sponsor, Mantle! Canton's in bed with Nasdaq, a Google DeepMind paper talks up the role of blockchain in an agentic economy, and an alleged insider cashes in on Maduro's capture. In this DEX in the City episode, hosts Katherine Kirkpatrick Bos, Jessi Brooks and Vy Le dive into the implications of Canton's Nasdaq deal, why DeepMind's study matters for crypto, and the legality of insider trading on prediction markets. Vy highlights what Canton's Nasdaq deal signals about the priorities of institutions adopting blockchain technology. Katherine and Jessi debate what happens when the machines take over. Plus, should federal officials be banned from using prediction markets?
Hosts:
Jessi Brooks
Katherine Kirkpatrick Bos
TuongVy Le
Links:
Bitcoin Rallies to $93,000 After U.S. Attack on Venezuela
How the x402 Standard Is Enabling AI Agents to Pay Each Other
Why the Black Friday Whale's $192 Million Crypto Trade Was Legal
DEX in the City: Insider Trading and Crypto: What the Law Actually Says
Google DeepMind's agentic economy paper
Pawthereum's website
A copy of Rep. Ritchie's bill
Learn more about your ad choices. Visit megaphone.fm/adchoices