In this episode, host Ben Lorica talks with Sakana AI research scientist Stefania Druga, formerly a research scientist at Google DeepMind, about building AI tools for young learners and what that teaches us about AI design for everyone. Subscribe to the Gradient Flow Newsletter
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise.

Pliny the Liberator | pliny.gg - discord.gg/basi

Stories covered:
- ChatGPT Nears 900 Million Weekly Active Users But Gemini Is Catching Up
- From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion
- Google Tells Advertisers It'll Bring Ads to Gemini in 2026
- Meta Acquires Limitless, an A.I. Pendant Company Backed by Sam Altman
- Here's how Google is laying the foundation for our mixed reality future
- OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice
- Svedka's First Super Bowl Ad Will Be Made Primarily With AI
- AI Slop Is Ruining Reddit for Everyone
- TESCREALers paying journalists at major outlets to cover AI
- The Resonant Computing Manifesto (from Masnick)
- Techdirt fundraiser
- From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style
- Bare Metal Email
- Jeff in Austria
- Golden Globes enter the world of podcasts and tread carefully, avoiding controversy
- Who says AI isn't useful? Real-time Cricket Sorting By Sex

Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan
Guest: Pliny the Liberator

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
In episode 39 of The League, David Magid and Benoy Thanjan break down the latest developments reshaping the clean-energy market. They cover the Pine Gate Renewables bankruptcy, the push for new U.S. restrictions on Chinese solar and grid equipment, and the detention of Qcells shipments under UFLPA. David also highlights Section 232 tariff impacts and Google DeepMind's new GenCast AI weather-forecasting model. Host Bio: Benoy Thanjan Benoy Thanjan is the Founder and CEO of Reneu Energy, a solar development and consulting firm, and a strategic advisor to multiple cleantech startups. Over his career, Benoy has developed over 100 MW of solar projects across the U.S., helped launch the first residential solar tax equity funds at Tesla, and brokered $45 million in Renewable Energy Credit ("REC") transactions. Prior to founding Reneu Energy, Benoy was the Environmental Commodities Trader in Tesla's Project Finance Group, where he managed one of the largest environmental commodities portfolios. He originated REC trades and co-developed a monetization and hedging strategy with senior leadership to enter the East Coast market. As Vice President at Vanguard Energy Partners, Benoy crafted project finance solutions for commercial-scale solar portfolios. His role at Ridgewood Renewable Power, a private equity fund with 125 MW of U.S. renewable assets, involved evaluating investment opportunities and maximizing returns. He also played a key role in the sale of the firm's renewable portfolio. Earlier in his career, Benoy worked in Energy Structured Finance at Deloitte & Touche and Financial Advisory Services at Ernst & Young, following an internship on the trading floor at D.E. Shaw & Co., a multibillion-dollar hedge fund. Benoy holds an MBA in Finance from Rutgers University and a BS in Finance and Economics from NYU Stern, where he was an Alumni Scholar.
Connect with Benoy on LinkedIn: https://www.linkedin.com/in/benoythanjan/ Learn more: https://reneuenergy.com https://www.solarmaverickpodcast.com Host Bio: David Magid David Magid is a seasoned renewable energy executive with deep expertise in solar development, financing, and operations. He has worked across the clean energy value chain, leading teams that deliver distributed generation and community solar projects. David is widely recognized for his strategic insights on interconnection, market economics, and policy trends shaping the U.S. solar industry. Connect with David on LinkedIn: https://www.linkedin.com/in/davidmagid/ If you have any questions or comments, you can email us at info@reneuenergy.com.
Our guest in this episode is Holly Elmore, who is the Founder and Executive Director of PauseAI US. The website pauseai-us.org starts with this headline: "Our proposal is simple: Don't build powerful AI systems until we know how to keep them safe. Pause AI." But PauseAI isn't just a talking shop. They're probably best known for organising public protests. The UK group has demonstrated in Parliament Square in London, with Big Ben in the background, and also outside the offices of Google DeepMind. A group of 30 PauseAI protesters gathered outside the OpenAI headquarters in San Francisco. Other protests have taken place in New York, Portland, Ottawa, São Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney, among other cities. Previously, Holly was a researcher at the think tank Rethink Priorities in the area of Wild Animal Welfare. And before that, she studied evolutionary biology in Harvard's Organismic and Evolutionary Biology department.

Selected follow-ups:
- Holly Elmore - Substack
- PauseAI US
- PauseAI - global site
- Wild Animal Suffering... and why it matters
- Hard problem of consciousness - Wikipedia
- The Unproven (And Unprovable) Case For Net Wild Animal Suffering: A Reply To Tomasik - by Michael Plant
- Leading Evolution Compassionately - Herbivorize Predators
- David Pearce (philosopher) - Wikipedia
- The AI industry is racing toward a precipice - Machine Intelligence Research Institute (MIRI)
- Nick Bostrom's new views regarding AI/AI safety - Reddit
- AI is poised to remake the world; Help us ensure it benefits all of us - Future of Life Institute
- On being wrong about AI - by Scott Aaronson, on his previous suggestion that it might take "a few thousand years" to reach superhuman AI
- California Institute of Machine Consciousness - organisation founded by Joscha Bach
- Pausing AI is the only safe approach to digital sentience - article by Holly Elmore
- Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers - book by Geoffrey Moore

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
RECORDED JANUARY 22, 2025; Originally released FEBRUARY 12, 2025
Guests: Dr. Ilan Price, Senior Research Scientist, & Matt Willson, Research Engineer

From the Euro to the GFS to the Canadian, there are multitudes of models that forecasters use to predict our daily weather. There are models for short-term severe forecasting, 10-day outlooks from your local news, and even models that predict our climate years into the future. As technology advances, so do all of these models, and the technology we are focusing on today on Weather Geeks is AI. While it may seem like a buzzword these days, it can be used to enhance our industry and help us all reach our common goal: saving lives and property. We are thrilled to welcome Dr. Ilan Price to discuss GenCast, Google's weather forecasting model that is entirely powered by AI. How does it stack up to the models we know and love? The answer may surprise you…

Chapters:
00:00 Introduction to AI in Weather Forecasting
02:10 Meet the Experts: Ilan Price and Matt Willson
06:34 Understanding GenCast: The AI Weather Model
10:47 Machine Learning vs Traditional Forecasting
13:22 Data Sources and Ethical Considerations
15:10 Handling Extreme Weather Events
21:15 Validation and Verification of GenCast
23:26 Impact of GenCast on Weather Forecasting

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
AI is speeding up recruitment, but can it replace the human touch? Not quite. Becky Pradal-Rogers, Head of Talent Acquisition at Google DeepMind, knows what it takes to hire the right people in a world where top talent is rare, roles are evolving, and AI is changing the game. From using AI to map talent pools to piloting interviews where candidates use AI, Becky shares how her team balances efficiency, innovation, and the human insight that machines can't replicate. Join Becky and Arctic Shores Co-Founder Robert Newry as they unpack the future of talent acquisition, the wild-card candidates you don't want AI to miss, and why the recruiter's instinct still rules.

You'll learn:
⭐ How AI is flooding applications — and why top recruiters are mapping the market to find the best talent at the source
Discover how Google DeepMind is dominating the AI race with "The Thinking Game"! In this episode of Applelianos Podcast we analyze the documentary that reveals the secrets of Demis Hassabis: from chess prodigy to Nobel laureate for AlphaFold. We explore AlphaGo's victory at Go, advances in proteins that cure diseases, and the vision of AGI by 2030 with Gemini. Is Google unbeatable against OpenAI? Hear about the ethical risks, the breakthroughs, and why this supremacy is changing the world. Don't miss it! #DeepMind #IA

SPONSORED BY SEOXAN - Professional SEO optimization for your business
https://seoxan.es/crear_pedido_hosting Coupon code "APPLE"
https://seoxan.es
https://uptime.urtix.es

//Links
https://youtu.be/d95J8yzvjbQ?si=R04WmBmQeVIfGYIJ
https://www.elmundo.es/tecnologia/2025/11/26/69271d8be9cf4a20538b458e.html#

JOIN THE LIVE SHOW: Leave your opinion in the comments, ask questions, and be part of the most important conversation about the future of the iPad and the Apple ecosystem. Your voice counts!

DID YOU LIKE THE EPISODE? ✨ LIKE the episode, SUBSCRIBE and turn on notifications so you don't miss anything, COMMENT, and SHARE with your Applelianos friends.

FOLLOW US ON ALL OUR PLATFORMS:
YouTube: https://www.youtube.com/@Applelianos
Telegram: https://t.me/+Jm8IE4n3xtI2Zjdk
X (Twitter): https://x.com/ApplelianosPod
Facebook: https://www.facebook.com/applelianos
Apple Podcasts: https://apple.co/39QoPbO
YouTube tests a new custom feed just like Instagram and TikTok, Shorts AI Creation Tools get updates, and I experiment with speeding up Lauren to see if that helps. Also the Head of Instagram stops by to explain that reposting your own content to Feed really won't do much, and the team at TikTok shares some stats around using Creators to make content for brands. After the music, I do Wednesday Waffle talking about a book I read recently.

Links:
- YouTube: Testing "Your Custom Feed" (Google Support: YouTube)
- YouTube: New Communities Features, Expansion of Shorts AI Creation Tools, and Handles Update! (YouTube)
- Instagram: Does Reposting Your Content To Feed Help? (Instagram)
- TikTok: The Creator Advantage: How creators drive real brand impact on TikTok (TikTok)
- TikTok: TikTok One - Creative Academy Videos (TikTok)

Wednesday Waffle:
- Book: Wrong Place, Wrong Time - Gillian McAllister (Amazon)

Transcript: Daniel Hill: Welcome to The Instagram Stories for Wednesday, November 26th. I'm your host, Daniel Hill. There is a lot of social media news to talk about today. The YouTube team has expanded their Shorts AI creation tools. The head of Instagram explains whether or not it's worth it to repost your own posts and if that'll help you get more engagement. The team at TikTok shares some data explaining why it's so important to work with creators if you're a business and how that can drive brand impact for your business. We'll get into that along with some video guides that the TikTok team has made to help you make better content. And after all the social media news, I will do a Wednesday Waffle where I talk about a topic that may or may not be related to social media. All of that and more on today's episode. But first, here's a quick word from our sponsors. Welcome back. Let's start with the YouTube news. Before we dive in, a little bit of context.
I've been talking on the show recently about how Instagram is going to allow you to customize what you see in your For You feed based on what you are personally interested in, and you can pick which topics you're interested in and which ones you're not. TikTok has had that for a long time. There are sliders that you can move to indicate which kinds of content you want to see more or less of. And they recently added the ability to adjust what level of AI content you see in your feed. Now, YouTube is copying that, and they shared yesterday that they are testing something called Your Custom Feed. They say, quote, "We're experimenting with a new feature called your custom feed that lets you customize recommendations for your home feed. If you are part of the experiment, you will see your custom feed appear on your homepage as a chip beside home. When you click into it, you can update your existing home feed recommendations by entering a single prompt. This feature is designed to give you an easy-to-use way to have more control over your suggested content. If you see it, check it out and share your feedback". I will link to this post in the show notes so that you can see it for yourself. All right, moving on. Since we're already talking about YouTube news, let's move to Lauren from the YouTube Creator Insider team with her updates talking about how the Shorts AI creation tools are being expanded and an update to the way YouTube handles channel names versus handles. Here's the clip from Lauren. Uh, one quick thing before I play the clip. I was reading the feedback that I got about the show and some of you mentioned that Lauren's updates can drag on a little bit, and I agree. So, I'm going to experiment with speeding up Lauren just a little bit. Hopefully, it's enough that it goes faster and you don't feel like Lauren's dragging, but you can still catch what she's saying.Lauren: What's up, insiders?
I'm Lauren, a program manager working on our product team here at YouTube and the producer of Creator Insider. Up until now, channel names were used as the identifier for channels across live chat and channel memberships on main and YouTube Studio. Now, a creator's handle will be shown across these services as their identifier. For moderators of live chat, you can still navigate to a user's channel by tapping on their handle. Let us know if you have any questions. In June, we talked about new AI-powered Shorts creation tools. If you missed the update, we'll leave more information in the description. We're happy to share that we're expanding standalone clips, green screen backgrounds, AI playground, and photo-to-video to new markets around the world for creators with their YouTube language settings set to English. We're also leveling up the photo-to-video experience with new prompt capabilities. Now you can create a prompt from scratch, watch your memories come to life, and even add speech to give your video a voice. We're also introducing new GenAI effects that transform your sketches into captivating videos, powered by Veo. These effects are now available globally. Additionally, speech to song and the ability to add lyrics and vocals in Dream Track are now available to creators in the US. These features will be rolling out this month and we'll keep you posted as we add new features. We're also bringing the power of Google DeepMind's Veo 3 model to Shorts, available for everyone on mobile. This upgrade from Veo 2 lets you create videos up to 8 seconds long, previously six, now with synchronized sound effects, ambient audio, and speech. We'll leave more info below. Next, updates for communities.
If you're still on the fence about enabling communities, an internal experiment in early September 2025 found that channels with YouTube communities enabled saw on average an increase in post impressions and likes on their channel.Daniel Hill: Okay, I'm going to stop it there because the rest of the update is about communities and I don't think it's very interesting. But if you do want to check out the whole post, I will link to it in the show notes so you can watch it for yourself. Hopefully the increased speed with which Lauren explained those things still let you understand what was going on and hopefully kept her a little bit more brief than usual. Okay, now let's move on to the Instagram section, where the head of Instagram answered the question about whether or not reposting your content in your own feed does anything. So, you have the opportunity to share content that you've made from your feed to your story, for instance, but now you can also repost it to your feed. If you posted a piece of content and it didn't really do that well, it might be tempting to repost it to your feed so that your followers have another chance to see it. The head of Instagram explains it's not really worth it to do that. Here's the clip.Adam Mosseri: Since we launched reposts a couple months ago, I get the question a lot: Should I repost my own content? And you can. It might help a little bit on the margins, but it's not going to meaningfully change the amount of reach that you get. If you want to try and help your post go a little bit farther, I'd recommend instead going into the comments, responding to some people, liking some comments, and interacting with the people who've taken the time to actually like or comment on your post. This will help more than just reposting something that you've already posted. But I understand why people try. And it's not going to hurt you to do so, but it's not going to actually help. So, I wanted to answer that question definitively once and for all.
Hopefully, this helps later.Daniel Hill: So, there you have it. Not really worth a lot of time and energy. We're going to take a quick break. When we come back, some information from the TikTok team about how creators can help to drive impact for brands and additionally some videos from the TikTok team helping you to make better content. Stick around. Welcome back. Let's continue with the TikTok news. The team at TikTok made a long blog post sharing some data about how much creators making content and having brands push that content can impact the business that the brand does, as opposed to the brand just making content on their own or hiring a marketing company. The importance of this cannot be overstated, because the content comes across as more authentic. They share some stats explaining that creator ads, meaning ads based on a piece of content that a creator made, can drive a 70% higher click-through rate and 159% higher engagement rate than non-creator ads. Okay, so why such a big difference? Three main reasons. First, when creators are making content, they're doing it through the lens of TikTok culture. They're familiar with the platform, not from the perspective of trying to sell a product or service, but rather just being familiar with the community. Additionally, creators can make a lot of good content very quickly. We are all used to sitting down to come up with an idea of something that we think could potentially work, coming up with what we need in order to make that piece of content, whether it's a script, finding a location, then filming it, editing it, and publishing it, and doing that for ourselves. So, when brands are working with creators, they're tapping into this system that we are all doing all the time.
Anyway, another key thing to remember is that when brands partner with creators, those creators have a wide variety of different voices, skill sets, and things they bring to the table, all of which appeal to different people. So, it does make sense to partner with a wide variety of creators. The third reason that this is so effective is because people already follow these creators. They liked them enough to follow them. When a creator makes a piece of content about a brand or product or service and posts it to their account, it comes across more authentically because it is not coming from the advertiser's account. According to a study from the TikTok team, ads posted to a creator's account have a 59% higher engagement rate and a 16% higher 6-second view-through rate than those that are not posted directly to the creator's account. So, it's worth it to do this. There's more stats and strategy in this blog post, which I will link to, but I also uncovered something called TikTok One, which is a creative academy to help you make better content on TikTok. I will play a short snippet of one of the videos that I found on there so you can get an idea of what this is all about.Unknown Speaker (from TikTok One Clip): This video is about TikTok creative best practices. Good creative is imperative for a successful ad campaign. There are some essential guidelines you must follow in order to set yourself up for that success. These include video duration, design elements, safe zone, and video formats. Today we're talking creative codes: six secrets to help you decode TikTok's creative potential. And what better place to start than code number one, TikTok first. So when we say TikTok first, what do we mean? Going TikTok first means creating natural-feeling TikTok content that's authentic to the platform. Feeling authentic to the For You page is as simple as taking cues from the content you love. How can you make content that feels organic to the For You page?
Here are some quick tips that will help you look right at home on TikTok. Start simple. From filming at a professional shoot to filming on your phone, you can execute your ideas in the way that works for you. Go 9:16. This is a platform where vertical video thrives. Frame your content accordingly. Shoot high-res. Whether you use high-end software or smartphone technology, create video content that is clear and crisp.Daniel Hill: Okay, I'm going to stop the clip there. You get the idea. If you want to watch the rest of this video series, which I actually think is very good, I will link to it in the show notes. Be sure to check it out for yourself. That is it for today's news. If you would like to hear me do Wednesday Waffle, which is where I talk about another topic that may or may not be related to social media, stick around after the music.Music: Instagram news got you covered. Sometimes even TikTok relevant platforms in the metaverse. Ahead of the wave without a break or a pit stop, waiting for Zuckerberg to give me the big job. Use trademarks and logos with Insta's permission. Of course, if you like the show and you gain some good info, maybe leave a review. It's a type of applause. Just drop me a message if you want to collab. If you got some good content or you want to run ads, @DanielHillMedia is where I am. TikTok, Facebook, at Instagram. All right, thank you for sticking around to hear me talk about something else. And now I would like to recommend a book. I don't think I did this previously on the show. I would like to recommend a book called Wrong Place, Wrong Time by Gillian McAllister. I would categorize this book as a thriller/time travel book. I don't like to normally recommend books because everyone likes different things in books. However, this book is excellent. I spend a lot of time reading books. I like to read a few minutes every night before I go to sleep, and this one really had me hooked.
I was having trouble forcing myself actually to go to sleep. The ending ties together perfectly. I will link to this book in the show notes. Definitely check it out. Wrong Place, Wrong Time by Gillian McAllister. If you like time travel, if you like science fiction, if you like thrillers, if you like any books that are a bit mysterious or suspenseful, I think you will like this. Find the link to the book in the show notes. And thank you for listening to me talk about something else other than social media for a minute.

Sign Up for The Weekly Email Roundup: Newsletter
Leave a Review: Apple Podcasts
Follow Me on Instagram: @danielhillmedia

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Podcast: The Gradient: Perspectives on AI
Episode: Iason Gabriel: Value Alignment and the Ethics of Advanced AI Systems
Release date: 2025-11-26

Episode 143

I spoke with Iason Gabriel about:
* Value alignment
* Technology and worldmaking
* How AI systems affect individuals and the social world

Iason is a philosopher and Senior Staff Research Scientist at Google DeepMind. His work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics and human rights. You can find him on his website and Twitter/X.

Find me on Twitter (or LinkedIn if you want…) for updates, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Outline
* (00:00) Intro
* (01:18) Iason's intellectual development
* (04:28) Aligning language models with human values, democratic civility and agonism
* (08:20) Overlapping consensus, differing norms, procedures for identifying norms
* (13:27) Rawls' theory of justice, the justificatory and stability problems
* (19:18) Aligning LLMs and cooperation, speech acts, justification and discourse norms, literacy
* (23:45) Actor Network Theory and alignment
* (27:25) Value alignment and Iason's starting points
* (33:10) The Ethics of Advanced AI Assistants, AI's impacts on social processes and users, personalization
* (37:50) AGI systems and social power
* (39:00) Displays of care and compassion, Machine Love (Joel Lehman)
* (41:30) Virtue ethics, morality and language, virtue in AI systems vs. MacIntyre's conception in After Virtue
* (45:00) The Challenge of Value Alignment
* (45:25) Technologists as worldmakers
* (51:30) Technological determinism, collective action problems
* (55:25) Iason's goals with his work
* (58:32) Outro

Links
Papers:
* AI, Values, and Alignment (2020)
* Aligning LMs with Human Values (2023)
* Toward a Theory of Justice for AI (2023)
* The Ethics of Advanced AI Assistants (2024)
* A matter of principle? AI alignment as the fair treatment of claims (2025)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Understanding AI has never been more critical. It's part of most conversations around emerging tech, and it raises endless questions around safety, creativity, copyright, and intellectual property. The UK government, too, now has a national focus on upskilling young people for the “AI-powered jobs of the future”.

‘Experience AI' is a programme developed by the Raspberry Pi Foundation and Google DeepMind and delivered across the UK and internationally. It helps improve educators' and students' skills and confidence in how AI technology works – and fosters the important literacies that surround it. In this episode, Vicki is joined by the Raspberry Pi Foundation's Ben Garside and Parent Zone's chief of staff Megan Rose to look at the impact that Experience AI is having now, and where the programme may be going next.

Talking points:
* Beyond understanding prompts and functionalities, how can educational resources tackle topics like the wider societal impact of AI?
* Is Experience AI meeting educators' current needs – and are learnings (around, for example, the importance of not ‘anthropomorphising' chatbots) landing well?
* As children around the globe gain access to AI, how do their experiences of it – and attitudes towards it – change depending on where they live?

Tech Shock is a Parent Zone production. Follow Parent Zone on social media for all the latest on our work helping families to thrive in the digital age. Presented by Vicki Shotbolt. Tech Shock is produced and edited by Tim Malster.
In this mind-bending episode of The Box of Oddities, Kat and Jethro dive into two stories that push the boundaries of communication, perception, and the very nature of time itself. First, Jethro unpacks the extraordinary modern effort to build the world's first dolphin chatbot—a real AI project inspired by a quirky 1960s SETI club called The Order of the Dolphin. From Carl Sagan and Frank Drake's early theories to Google DeepMind's modern neural networks decoding dolphin whistles, this segment explores how scientists hope communication with dolphins may become the training wheels for future alien contact. With signature humor and scientific wonder, we explore dolphin intelligence, their complex acoustic “language,” and what the first dolphin-to-human conversation might actually sound like. Then Kat takes us into the freezing darkness of the Scarasson Abyss, where French speleologist Michel Siffre spent 63 days isolated from all clocks, sunlight, and human contact to study how humans perceive time. As his internal world unraveled, Siffre made discoveries that reshaped chronobiology—and revealed how fragile our sense of reality truly is. From hallucinations to distorted time cycles to the stunning moment he emerged believing he still had a month left underground, Kat tells the story in vivid detail with plenty of Oddity-level dread and fascination. Plus: bizarre YouTube ads, Thanksgiving confusion, and a rapid-fire tour of wild historical events—from Einstein's famous paper to a meteor that turned night into day. It's science, strangeness, humor, and existential questions—all in one episode. Keep flying that freak flag, you beautiful freak. Learn more about your ad choices. Visit megaphone.fm/adchoices
How can you write science-based fiction without info-dumping your research? How can you use AI tools in a creative way, while still focusing on a human-first approach? Why is adapting to the fast pace of change so difficult, and how can we make the most of this time? Jamie Metzl talks about Superconvergence and more. In the intro, How to avoid author scams [Written Word Media]; Spotify vs Audible audiobook strategy [The New Publishing Standard]; Thoughts on Author Nation and why constraints are important in your author life [Self-Publishing with ALLi]; Alchemical History And Beautiful Architecture: Prague with Lisa M Lilly on my Books and Travel Podcast. Today's show is sponsored by Draft2Digital, self-publishing with support, where you can get free formatting, free distribution to multiple stores, and a host of other benefits. Just go to www.draft2digital.com to get started. This show is also supported by my Patrons. Join my Community at Patreon.com/thecreativepenn

Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. You can listen above or on your favorite podcast app or read the notes and links below. Here are the highlights, and the full transcript is below.

Show Notes
* How personal history shaped Jamie's fiction writing
* Writing science-based fiction without info-dumping
* The superconvergence of three revolutions (genetics, biotech, AI) and why we need to understand them holistically
* Using fiction to explore the human side of genetic engineering, life extension, and robotics
* Collaborating with GPT-5 as a named co-author
* How to be a first-rate human rather than a second-rate machine

You can find Jamie at JamieMetzl.com.
Transcript of interview with Jamie Metzl Jo: Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. So welcome, Jamie. Jamie: Thank you so much, Jo. Very happy to be here with you. Jo: There is so much we could talk about, but let's start with you telling us a bit more about you and how you got into writing. From History PhD to First Novel Jamie: Well, I think like a lot of writers, I didn't know I was a writer. I was just a kid who loved writing. Actually, just last week I was going through a bunch of boxes from my parents' house and I found my autobiography, which I wrote when I was nine years old. So I've been writing my whole life and loving it. It was always something that was very important to me. When I finished my DPhil, my PhD at Oxford, and my dissertation came out, it just got scooped up by Macmillan in like two minutes. And I thought, “God, that was easy.” That got me started thinking about writing books. I wanted to write a novel based on the same historical period – my PhD was in Southeast Asian history – and I wanted to write a historical novel set in the same period as my dissertation, because I felt like the dissertation had missed the human element of the story I was telling, which was related to the Cambodian genocide and its aftermath. So I wrote what became my first novel, and I thought, “Wow, now I'm a writer.” I thought, “All right, I've already published one book. I'm gonna get this other book out into the world.” And then I ran into the brick wall of: it's really hard to be a writer. It's almost easier to write something than to get it published. I had to learn a ton, and it took nine years from when I started writing that first novel, The Depths of the Sea, to when it finally came out. 
But it was such a positive experience, especially to have something so personal to me as that story. I'd lived in Cambodia for two years, I'd worked on the Thai-Cambodian border, and I'm the child of a Holocaust survivor. So there was a whole lot that was very emotional for me. That set a pattern for the rest of my life as a writer, at least where, in my nonfiction books, I'm thinking about whatever the issues are that are most important to me. Whether it was that historical book, which was my first book, or Hacking Darwin on the future of human genetic engineering, which was my last book, or Superconvergence, which, as you mentioned in the intro, is my current book. But in every one of those stories, the human element is so deep and so profound. You can get at some of that in nonfiction, but I've also loved exploring those issues in deeper ways in my fiction. So in my more recent novels, Genesis Code and Eternal Sonata, I've looked at the human side of the story of genetic engineering and human life extension. And now my agent has just submitted my new novel, Virtuoso, about the intersection of AI, robotics, and classical music. With all of this, who knows what's the real difference between fiction and nonfiction? We're all humans trying to figure things out on many different levels. Shifting from History to Future Tech Jo: I knew that you were a polymath, someone who's interested in so many things, but the music angle with robotics and AI is fascinating. I do just want to ask you, because I was also at Oxford – what college were you at? Jamie: I was in St. Antony's. Jo: I was at Mansfield, so we were in that slightly smaller, less famous college group, if people don't know. Jamie: You know, but we're small but proud. Jo: Exactly. That's fantastic. You mentioned that you were on the historical side of things at the beginning and now you've moved into technology and also science, because this book Superconvergence has a lot of science. 
So how did you go from history and the past into science and the future? Biology and Seeing the Future Coming Jamie: It's a great question. I'll start at the end and then back up. A few years ago I was speaking at Lawrence Livermore National Laboratory, which is one of the big scientific labs here in the United States. I was a guest of the director and I was speaking to their 300 top scientists. I said to them, “I'm here to speak with you about the future of biology at the invitation of your director, and I'm really excited. But if you hear something wrong, please raise your hand and let me know, because I'm entirely self-taught. The last biology course I took was in 11th grade of high school in Kansas City.” Of course I wouldn't say that if I didn't have a lot of confidence in my process. But in many ways I'm self-taught in the sciences. As you know, Jo, and as all of your listeners know, the foundation of everything is curiosity and then a disciplined process for learning. Even our greatest super-specialists in the world now – whatever their background – the world is changing so fast that if anyone says, “Oh, I have a PhD in physics/chemistry/biology from 30 years ago,” the exact topic they learned 30 years ago is less significant than their process for continuous learning. More specifically, in the 1990s I was working on the National Security Council for President Clinton, which is the president's foreign policy staff. My then boss and now close friend, Richard Clarke – who became famous as the guy who had tragically predicted 9/11 – used to say that the key to efficacy in Washington and in life is to try to solve problems that other people can't see. For me, almost 30 years ago, I felt to my bones that this intersection of what we now call AI and the nascent genetics revolution and the nascent biotechnology revolution was going to have profound implications for humanity. So I just started obsessively educating myself. 
When I was ready, I started writing obscure national security articles. Those got a decent amount of attention, so I was invited to testify before the United States Congress. I was speaking out a lot, saying, “Hey, this is a really important story. A lot of people are missing it. Here are the things we should be thinking about for the future.” I wasn't getting the kind of traction that I wanted. I mentioned before that my first book had been this dry Oxford PhD dissertation, and that had led to my first novel. So I thought, why don't I try the same approach again – writing novels to tell this story about the genetics, biotech, and what later became known popularly as the AI revolution? That led to my two near-term sci-fi novels, Genesis Code and Eternal Sonata. On my book tours for those novels, when I explained the underlying science to people in my way, as someone who taught myself, I could see in their eyes that they were recognizing not just that something big was happening, but that they could understand it and feel like they were part of that story. That's what led me to write Hacking Darwin, as I mentioned. That book really unlocked a lot of things. I had essentially predicted the CRISPR babies that were born in China before it happened – down to the specific gene I thought would be targeted, which in fact was the case. After that book was published, Dr. Tedros, the Director-General of the World Health Organization, invited me to join the WHO Expert Advisory Committee on Human Genome Editing, which I did. It was a really great experience and got me thinking a lot about the upside of this revolution and the downside. The Birth of Superconvergence Jamie: I get a lot of wonderful invitations to speak, and I have two basic rules for speaking: Never use notes. Never ever. Never stand behind a podium. Never ever. Because of that, when I speak, my talks tend to migrate. 
I'd be speaking with people about the genetics revolution as it applied to humans, and I'd say, “Well, this is just a little piece of a much bigger story.” The bigger story is that after nearly four billion years of life on Earth, our one species has the increasing ability to engineer novel intelligence and re-engineer life. The big question for us, and frankly for the world, is whether we're going to be able to use that almost godlike superpower wisely. As that idea got bigger and bigger, it became this inevitable force. You write so many books, Jo, that I think it's second nature for you. Every time I finish a book, I think, “Wow, that was really hard. I'm never doing that again.” And then the books creep up on you. They call to you. At some point you say, “All right, now I'm going to do it.” So that was my current book, Superconvergence. Like everything, every journey you take a step, and that step inspires another step and another. That's why writing and living creatively is such a wonderfully exciting thing – there's always more to learn and always great opportunities to push ourselves in new ways. Balancing Deep Research with Good Storytelling Jo: Yeah, absolutely. I love that you've followed your curiosity and then done this disciplined process for learning. I completely understand that. But one of the big issues with people like us who love the research – and having read your Superconvergence, I know how deeply you go into this and how deeply you care that it's correct – is that with fiction, one of the big problems with too much research is the danger of brain-dumping. Readers go to fiction for escapism. They want the interesting side of it, but they want a story first. What are your tips for authors who might feel like, “Where's the line between putting in my research so that it's interesting for readers, but not going too far and turning it into a textbook?” How do you find that balance? Jamie: It's such a great question. 
I live in New York now, but I used to live in Washington when I was working for the U.S. government, and there were a number of people I served with who later wrote novels. Some of those novels felt like policy memos with a few sex scenes – and that's not what to do. To write something that's informed by science or really by anything, everything needs to be subservient to the story and the characters. The question is: what is the essential piece of information that can convey something that's both important to your story and your character development, and is also an accurate representation of the world as you want it to be? I certainly write novels that are set in the future – although some of them were a future that's now already happened because I wrote them a long time ago. You can make stuff up, but as an author you have to decide what your connection to existing science and existing technology and the existing world is going to be. I come at it from two angles. One: I read a huge number of scientific papers and think, “What does this mean for now, and if you extrapolate into the future, where might that go?” Two: I think about how to condense things. We've all read books where you're humming along because people read fiction for story and emotional connection, and then you hit a bit like: “I sat down in front of the president, and the president said, ‘Tell me what I need to know about the nuclear threat.'” And then it's like: insert memo. That's a deal-killer. It's like all things – how do you have a meaningful relationship with another person? It's not by just telling them your story. Even when you're telling them something about you, you need to be imagining yourself sitting in their shoes, hearing you. These are very different disciplines, fiction and nonfiction. But for the speculative nonfiction I write – “here's where things are now, and here's where the world is heading” – there's a lot of imagination that goes into that too. 
It feels in many ways like we're living in a sci-fi world because the rate of technological change has been accelerating continuously, certainly for the last 12,000 years since the dawn of agriculture. It's a balance. For me, I feel like I'm a better fiction writer because I write nonfiction, and I'm a better nonfiction writer because I write fiction. When I'm writing nonfiction, I don't want it to be boring either – I want people to feel like there's a story and characters and that they can feel themselves inside that story. Jo: Yeah, definitely. I think having some distance helps as well. If you're really deep into your topics, as you are, you have to leave that manuscript a little bit so you can go back with the eyes of the reader as opposed to your eyes as the expert. Then you can get their experience, which is great. Looking Beyond Author-Focused AI Fears Jo: I want to come to your technical knowledge, because AI is a big thing in the author and creative community, like everywhere else. One of the issues is that creators are focusing on just this tiny part of the impact of AI, and there's a much bigger picture. For example, in 2024, Demis Hassabis from Google DeepMind and his collaborator John Jumper won the Nobel Prize in Chemistry for their work on AlphaFold. It feels to me like there's this massive world of what's happening with AI in health, climate, and other areas, and yet we are so focused on a lot of the negative stuff. Maybe you could share a couple of things we can be excited and optimistic about in terms of AI-powered science? Jamie: Sure. I'm so excited about all of the new opportunities that AI creates. But I also think there's a reason why evolution has preserved this very human feeling of anxiety: because there are real dangers. Anybody who's Pollyanna-ish and says, “Oh, the AI story is inevitably positive,” I'd be distrustful of. And anyone who says, “We're absolutely doomed, this is the end of humanity,” I'd also be distrustful of.
So let me tell you the positives and the negatives, and maybe some thoughts about how we navigate toward the former and away from the latter. AI as the New Electricity Jamie: When people think of AI right now, they're thinking very narrowly about these AI tools and ChatGPT. But we don't think of electricity that way. Nobody says, “I know electricity – electricity is what happens at the power station.” We've internalised the idea that electricity is woven into not just our communication systems or our houses, but into our clothes, our glasses – it's woven into everything and has super-empowered almost everything in our modern lives. That's what AI is. In Superconvergence, the majority of the book is about positive opportunities: In healthcare, moving from generalised healthcare based on population averages to personalised or precision healthcare based on a molecular understanding of each person's individual biology. As we build these massive datasets like the UK Biobank, we can take a next jump toward predictive and preventive healthcare, where we're able to address health issues far earlier in the process, when interventions can be far more benign. I'm really excited about that, not to mention the incredible new kinds of treatments – gene therapies, or pharmaceuticals based on genetics and systems-biology analyses of patients. Then there's agriculture. Over the last hundred years, because of the technologies of the Green Revolution and synthetic fertilisers, we've had an incredible increase in agricultural productivity. That's what's allowed us to quadruple the global population. But if we just continue agriculture as it is, as we get towards ten billion wealthier, more empowered people wanting to eat like we eat, we're going to have to wipe out all the wild spaces on Earth to feed them. These technologies help provide different paths toward increasing agricultural productivity with fewer inputs of land, water, fertiliser, insecticides, and pesticides. 
That's really positive. I could go on and on about these positives – and I do – but there are very real negatives. I was a member of the WHO Expert Advisory Committee on Human Genome Editing after the first CRISPR babies were very unethically created in China. I'm extremely aware that these same capabilities have potentially incredible upsides and very real downsides. That's the same as every technology in the past, but this is happening so quickly that it's triggering a lot of anxieties. Governance, Responsibility, and Why Everyone Has a Role Jamie: The question now is: how do we optimise the benefits and minimise the harms? The short, unsexy word for that is governance. Governance is not just what governments do; it's what all of us do. That's why I try to write books, both fiction and nonfiction, to bring people into this story. If people “other” this story – if they say, “There's a technology revolution, it has nothing to do with me, I'm going to keep my head down” – I think that's dangerous. The way we're going to handle this as responsibly as possible is if everybody says, “I have some role. Maybe it's small, maybe it's big. The first step is I need to educate myself. Then I need to have conversations with people around me. I need to express my desires, wishes, and thoughts – with political leaders, organisations I'm part of, businesses.” That has to happen at every level. You're in the UK – you know the anti-slavery movement started with a handful of people in Cambridge and grew into a global movement. I really believe in the power of ideas, but ideas don't spread on their own. These are very human networks, and that's why writing, speaking, communicating – probably for every single person listening to this podcast – is so important. Jo: Mm, yeah. Fiction Like AI 2041 and Thinking Through the Issues Jo: Have you read AI 2041 by Kai-Fu Lee and Chen Qiufan? Jamie: No. I heard a bunch of their interviews when the book came out, but I haven't read it. 
Jo: I think that's another good one because it's fiction – a whole load of short stories. It came out a few years ago now, but the issues they cover in the stories, about different people in different countries – I remember one about deepfakes – make you think more about the topics and help you figure out where you stand. I think that's the issue right now: it's so complex, there are so many things. I'm generally positive about AI, but of course I don't want autonomous drone weapons, you know? The Messy Reality of “Bad” Technologies Jamie: Can I ask you about that? Because this is why it's so complicated. Like you, I think nobody wants autonomous killer drones anywhere in the world. But if you right now were the defence minister of Ukraine, and your children are being kidnapped, your country is being destroyed, you're fighting for your survival, you're getting attacked every night – and you're getting attacked by the Russians, who are investing more and more in autonomous killer robots – you kind of have two choices. You can say, “I'm going to surrender,” or, “I'm going to use what technology I have available to defend myself, and hopefully fight to either victory or some kind of stand-off.” That's what our societies did with nuclear weapons. Maybe not every American recognises that Churchill gave Britain's nuclear secrets to America as a way of greasing the wheels of the Anglo-American alliance during the Second World War – but that was our programme: we couldn't afford to lose that war, and we couldn't afford to let the Nazis get nuclear weapons before we did. So there's the abstract feeling of, “I'm against all war in the abstract. I'm against autonomous killer robots in the abstract.” But if I were the defence minister of Ukraine, I would say, “What will it take for us to build the weapons we can use to defend ourselves?” That's why all this stuff gets so complicated. And frankly, it's why the relationship between fiction and nonfiction is so important. 
If every novel had a situation where every character said, “Oh, I know exactly the right answer,” and then they just did the right answer and it was obviously right, it wouldn't make for great fiction. We're dealing with really complex humans. We have conflicting impulses. We're not perfect. Maybe there are no perfect answers – but how do we strive toward better rather than worse? That's the question. Jo: Absolutely. I don't want to get too political on things. How AI Is Changing the Writing Life Jo: Let's come back to authors. In terms of the creative process, the writing process, the research process, and the business of being an author – what are some of the ways that you already use AI tools, and some of the ways, given your futurist brain, that you think things are going to change for us? Jamie: Great question. I'll start with a little middle piece. I found you, Jo, through GPT-5. I asked ChatGPT, “I'm coming out with this book and I want to connect with podcasters who are a little different from the ones I've done in the past. I've been a guest on Joe Rogan twice and some of the bigger podcasts. Make me a list of really interesting people I can have great conversations with.” That's how I found you. So this is one reward of that process. Let me say that in the last year I've worked on three books, and I'll explain how my relationship with AI has changed over those books. Cleaning Up Citations (and Getting Burned) Jamie: First is the highly revised paperback edition of Superconvergence. When the hardback came out, I had – I don't normally work with research assistants because I like to dig into everything myself – but the one thing I do use a research assistant for is that I can't be bothered, when I'm writing something, to do the full Chicago-style footnote if I'm already referencing an academic paper. So I'd just put the URL as the footnote and then hire a research assistant and say, “Go to this URL and change it into a Chicago-style citation. 
That's it.” Unfortunately, my research assistant on the hardback used early-days ChatGPT for that work. He did the whole thing, came back, everything looked perfect. I said, “Wow, amazing job.” It was only later, as I was going through them, that I realised something like 50% of them were invented footnotes. It was very painful to go back and fix, and it took ten times more time. With the paperback edition, I didn't use AI that much, but I did say things like, “Here's all the information – generate a Chicago-style citation.” That was better. I noticed there were a few things where I stopped using the thesaurus function on Microsoft Word because I'd just put the whole paragraph into the AI and say, “Give me ten other options for this one word,” and it would be like a contextual thesaurus. That was pretty good. Talking to a Robot Pianist Character Jamie: Then, for my new novel Virtuoso, I was writing a character who is a futurist robot that plays the piano very beautifully – not just humanly, but almost finding new things in the music we've written and composing music that resonates with us. I described the actions of that robot in the novel, but I didn't describe the inner workings of the robot's mind. In thinking about that character, I realised I was the first science-fiction writer in history who could interrogate a machine about what it was “thinking” in a particular context. I had the most beautiful conversations with ChatGPT, where I would give scenarios and ask, “What are you thinking? What are you feeling in this context?” It was all background for that character, but it was truly profound. Co-Authoring The AI Ten Commandments with GPT-5 Jamie: Third, I have another book coming out in May in the United States. I gave a talk this summer at the Chautauqua Institution in upstate New York about AI and spirituality. 
I talked about the history of our human relationship with our technology, about how all our religious and spiritual traditions have deep technological underpinnings – certainly our Abrahamic religions are deeply connected to farming, and Protestantism to the printing press. Then I had a section about the role of AI in generating moral codes that would resonate with humans. Everybody went nuts for this talk, and I thought, “I think I'm going to write a book.” I decided to write it differently, with GPT-5 as my named co-author. The first thing I did was outline the entire book based on the talk, which I'd already spent a huge amount of time thinking about and organising. Then I did a full outline of the arguments and structures. Then I trained GPT-5 on my writing style. The way I did it – which I fully describe in the introduction to the book – was that I'd handle all the framing: the full introduction, the argument, the structure. But if there was a section where, for a few paragraphs, I was summarising a huge field of data, even something I knew well, I'd give GPT-5 the intro sentence and say, “In my writing style, prepare four paragraphs on this.” For example, I might write: “AI has the potential to see us humans like we humans see ant colonies.” Then I'd say, “Give me four paragraphs on the relationship between the individual and the collective in ant colonies.” I could have written those four paragraphs myself, but it would've taken a month to read the life's work of E.O. Wilson and then write them. GPT-5 wrote them in seconds or minutes, in its thinking mode. I'd then say, “It's not quite right – change this, change that,” and we'd go back and forth three or four times. Then I'd edit the whole thing and put it into the text. So this book that I could have written on my own in a year, I wrote a first draft of with GPT-5 as my named co-author in two days. 
The whole project will take about six months from start to finish, and I'm having massive human editing – multiple edits from me, plus a professional editor. It's not a magic AI button. But I feel strongly about listing GPT-5 as a co-author because I've written it differently than previous books. I'm a huge believer in the old-fashioned lone author struggling and suffering – that's in my novels, and in Virtuoso I explore that. But other forms are going to emerge, just like video games are a creative, artistic form deeply connected to technology. The novel hasn't been around forever – the current format is only a few centuries old – and forms are always changing. There are real opportunities for authors, and there will be so much crap flooding the market because everybody can write something and put it up on Amazon. But I think there will be a very special place for thoughtful human authors who have an idea of what humans do at our best, and who translate that into content other humans can enjoy. Traditional vs Indie: Why This Book Will Be Self-Published Jo: I'm interested – you mentioned that it's your named co-author. Is this book going through a traditional publisher, and what do they think about that? Or are you going to publish it yourself? Jamie: It's such a smart question. What I found quickly is that when you get to be an author later in your career, you have all the infrastructure – a track record, a fantastic agent, all of that. But there were two things that were really important to me here: I wanted to get this book out really fast – six months instead of a year and a half. It was essential to me to have GPT-5 listed as my co-author, because if it were just my name, I feel like it would be dishonest. Readers who are used to reading my books – I didn't want to present something different than what it was. I spoke with my agent, who I absolutely love, and she said that for this particular project it was going to be really hard in traditional publishing. 
So I did a huge amount of research, because I'd never done anything in the self-publishing world before. I looked at different models. There was one hybrid model that's basically the same as traditional, but you pay for the things the publisher would normally pay for. I ended up not doing that. Instead, I decided on a self-publishing route where I disaggregated the publishing process. I found three teams: one for producing the book, one for getting the book out into the world, and a smaller one for the audiobook. I still believe in traditional publishing – there's a lot of wonderful human value-add. But some works just don't lend themselves to traditional publishing. For this book, which is called The AI Ten Commandments, that's the path I've chosen. Jo: And when's that out? I think people will be interested. Jamie: April 26th. Those of us used to traditional publishing think, “I've finished the book, sold the proposal, it'll be out any day now,” and then it can be a year and a half. It's frustrating. With this, the process can be much faster because it's possible to control more of the variables. But the key – as I was saying – is to make sure it's as good a book as everything else you've written. It's great to speed up, but you don't want to compromise on quality. The Coming Flood of Excellent AI-Generated Work Jo: Yeah, absolutely. We're almost out of time, but I want to come back to your “flood of crap” and the “AI slop” idea that's going around. Because you are working with GPT-5 – and I do as well, and I work with Claude and Gemini – and right now there are still issues. Like you said about referencing, there are still hallucinations, though fewer. But fast-forward two, five years: it's not a flood of crap. It's a flood of excellent. It's a flood of stuff that's better than us. Jamie: We're humans. It's better than us in certain ways. If you have farm machinery, it's better than us at certain aspects of farming. I'm a true humanist. 
I think there will be lots of things machines do better than us, but there will be tons of things we do better than them. There's a reason humans still care about chess, even though machines can beat humans at chess. Some people are saying things I fully disagree with, like this concept of AGI – artificial general intelligence – where machines do everything better than humans. I've summarised my position in seven letters: “AGI is BS.” The only way you can believe in AGI in that sense is if your concept of what a human is and what a human mind is is so narrow that you think it's just a narrow range of analytical skills. We are so much more than that. Humans represent almost four billion years of embodied evolution. There's so much about ourselves that we don't know. As incredible as these machines are and will become, there will always be wonderful things humans can do that are different from machines. What I always tell people is: whatever you're doing, don't be a second-rate machine. Be a first-rate human. If you're doing something and a machine is doing that thing much better than you, then shift to something where your unique capacities as a human give you the opportunity to do something better. So yes, I totally agree that the quality of AI-generated stuff will get better. But I think the most creative and successful humans will be the ones who say, “I recognise that this is creating new opportunities, and I'm going to insert my core humanity to do something magical and new.” People are “othering” these technologies, but the technologies themselves are magnificent human-generated artefacts. They're not alien UFOs that landed here. It's a scary moment for creatives, no doubt, because there are things all of us did in the past that machines can now do really well. But this is the moment where the most creative people ask themselves, “What does it mean for me to be a great human?” The pat answers won't apply. In my Virtuoso novel I explore that a lot. 
The idea that “machines don't do creativity” – they will do incredible creativity; it just won't be exactly human creativity. We will be potentially huge beneficiaries of these capabilities, but we really have to believe in and invest in the magic of our core humanity. Where to Find Jamie and His Books Jo: Brilliant. So where can people find you and your books online? Jamie: Thank you so much for asking. My website is jamiemetzl.com – and my books are available everywhere. Jo: Fantastic. Thanks so much for your time, Jamie. That was great. Jamie: Thank you, Joanna. The post Writing The Future, And Being More Human In An Age of AI With Jamie Metzl first appeared on The Creative Penn.
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Conor and Logan break down the “vibe coding” renaissance enabled by Gemini 3. We explore what this shift means for developers and why the model's fluid coding experience is reshaping AI-assisted programming. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai Conor's AI Course: https://www.ai-mindset.ai/courses Conor's AI Newsletter: https://www.ai-mindset.ai/ Jaeden's AI Hustle Community: https://www.skool.com/aihustle
Today brings a raft of fresh models: GPT-5.1 (Instant, Thinking, Codex-Max), Grok 4.1, Gemini 3 Pro, Kimi-K2 Thinking, ERNIE 5.0, Qwen DeepResearch 2511, and VibeThinker. We also discuss group chats in ChatGPT, Google's IDE Antigravity, Chad (an IDE with built-in TikTok), the new Visual Studio 2026, Google Code Wiki, NanaBanan Pro, the Suncatcher project putting TPUs in orbit, SIMA 2 from DeepMind, Microsoft Agent 365, XPENG's flying taxis and their male- and female-styled humanoid IRON robots. We close with Anthropic's ethics experiments on model "rights" and reflections on digital clones and the embodiment of consciousness.
Gemini 3 is a few days old and the massive leap in performance and model reasoning has big implications for builders: as models begin to self-heal, builders are literally tearing out the functionality they built just months ago... ripping out the defensive coding and reshipping their agent harnesses entirely. Ravin Kumar (Google DeepMind) joins Hugo to break down exactly why the rapid evolution of models like Gemini 3 is changing how we build software. They detail the shift from simple tool calling to building reliable "Agent Harnesses", explore the architectural tradeoffs between deterministic workflows and high-agency systems, the nuance of preventing context rot in massive windows, and why proper evaluation infrastructure is the only way to manage the chaos of autonomous loops. They talk through: - The implications of models that can "self-heal" and fix their own code - The two cultures of agents: LLM workflows with a few tools versus when you should unleash high-agency, autonomous systems.
- Inside NotebookLM: moving from prototypes to viral production features like Audio Overviews - Why Needle in a Haystack benchmarks often fail to predict real-world performance - How to build agent harnesses that turn model capabilities into product velocity - The shift from measuring latency to managing time-to-compute for reasoning tasks LINKS From Context Engineering to AI Agent Harnesses: The New Software Discipline, a podcast Hugo did with Lance Martin, LangChain (https://high-signal.delphina.ai/episode/context-engineering-to-ai-agent-harnesses-the-new-software-discipline) Context Rot: How Increasing Input Tokens Impacts LLM Performance (https://research.trychroma.com/context-rot) Effective context engineering for AI agents by Anthropic (https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk) Watch the podcast video on YouTube (https://youtu.be/CloimQsQuJM) Join the final cohort of our Building AI Applications course starting Jan 12, 2026 (https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=vgrav)
Send us a text. Join hosts Alex Sarlin and Ben Kornell as they unpack the breakthroughs and backlash following the Google DeepMind AI for Learning Forum in London—and what it means for the future of edtech. ✨ Episode Highlights: [00:03:30] Google DeepMind's AI for Learning Forum sets a new global tone for learning innovation [00:06:58] Google's “Learn Your Way” tool personalizes entire textbooks with AI [00:08:12] AI video tools like Google Flow redefine classroom content creation [00:13:40] Why this could be the moment for teachers to become AI media creators [00:18:36] Risks of AI-generated video: deepfakes, disinformation, and youth impact [00:22:19] Duolingo stock crashes over 40% amid investor fears of big tech competition [00:23:52] Screen time backlash accelerates: parents turn to screen-free edtech [00:26:14] Why physical math books and comic-style curricula are surging in demand [00:27:35] A wave of screen-free edtech: from LeapFrog alumni to audio-first tools. Plus, special guests: [00:28:51] Michelle Culver, Founder of The Rithm Project, and Erin Mote, CEO of InnovateEDU, on the psychological risks of AI companions, building trust in AI tools, and designing for pro-social relationships [00:51:48] Ben Caulfield, CEO of Eedi, shares groundbreaking findings from their Google DeepMind study: AI tutors now match—and sometimes outperform—humans in math instruction, and how Eedi powers the future of scalable, safe AI tutoring.
In this episode, Nina Olding, Staff Product Manager at Weights & Biases and formerly at Google DeepMind, where she worked on trust and compliance for AI, joins Randy to explore the UX challenges of AI‑driven features. As AI becomes increasingly woven into digital products, the traditional UX cues and trust signals that users rely on are changing. Nina introduces her framework of the three “A's” for AI UX: Awareness, Agency, and Assurance, and explains how product teams can build this into their AI‑enabled products without launching a massive transformation programme. Key Takeaways — As AI features proliferate, the UX challenge is less about the technology and more about how users perceive, understand and trust the interactions. — Trust is based on three foundational dimensions for AI‑enabled products: Awareness, Agency, Assurance. — Awareness: Make it clear when AI is involved (and when it isn't). Invisible AI = risk of misunderstanding. Magical AI without context = disorientation. — Agency: Give users control, or at least the option to opt out, define boundaries, choose defaults vs advanced settings. — Assurance: Because AI can be non‑deterministic, you must design for confidence—indicators of reliability, transparency about limitations, ability to question or override outputs. Chapters 00:00 – Intro: Why AI products are failing on trust 00:47 – Nina Olding's journey from Google DeepMind to Weights & Biases 03:20 – The UX of AI: It's not just a chat window 04:08 – Introducing the Three A's framework: Awareness, Agency, Assurance 08:30 – Designing for Awareness: Visibility and user signals 14:40 – Agency: Giving users control and escape hatches 21:30 – Assurance: Transparency, confidence indicators, and humility 28:05 – Three key questions to assess AI UX 30:50 – The product case for trust: Compliance, loyalty, and retention 33:00 – Final thoughts: Building the trust muscle. Featured Links: Follow Nina on LinkedIn | Weights & Biases | Check out Nina's 'The hidden UX of AI' slides from Industry
Conference Cleveland 2025We're taking Community Questions for The Product Experience podcast.Got a burning product question for Lily, Randy, or an upcoming guest? Submit it here. Our HostsLily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath. Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A...
Proteins are crucial for life. They're made of amino acids that “fold” into millions of different shapes. And depending on their structure, they do radically different things in our cells. For a long time, predicting those shapes for research was considered a grand biological challenge. But in 2020, Google's AI lab DeepMind released AlphaFold, a tool that was able to accurately predict many of the structures necessary for understanding biological mechanisms in a matter of minutes. In 2024, the AlphaFold team was awarded a Nobel Prize in chemistry for the advance. Five years after its release, host Ira Flatow checks in on the state of that tech and how it's being used in health research with John Jumper, one of the lead scientists responsible for developing AlphaFold. Guest: John Jumper, scientist at Google DeepMind and co-recipient of the 2024 Nobel Prize in chemistry. Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Gemini 3 is officially here. ✨ ✨ ✨ For about 8 months, Gemini 2.5 Pro mostly maintained its standing as the top LLM in the world, yet Google just unleashed its successor in Gemini 3.0. So, what's new in Gemini 3? And whether you're a developer or a casual user, what does Google's new model unlock? Join us as we chat with Google's Logan Kilpatrick for all the answers. Gemini 3: What's new and what it unlocks for your business -- An Everyday AI Chat with Google DeepMind's Logan Kilpatrick and Jordan Wilson. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: Gemini 3 Release Overview & Features; State-of-the-Art AI Benchmarks Exceeded; Gemini 3 in Google Ecosystem Products; Gemini 3 Vibe Coding Capabilities Demo; Non-Developer Use Cases for Gemini 3; Multimodal Understanding and Visualizations; Agentic AI Tools: Gemini Agent & Antigravity; Business Growth with Gemini 3 AI Integration. Timestamps: 00:00 Gemini 3: State-of-the-Art AI 05:59 "Gemini 3: Build Ambitiously" 08:16 "AI Studio: Bringing Ideas Alive" 12:44 Gemini App Agents & Antigravity 14:57 "Enhancing AI as a Thought Partner" 17:01 AI Studio: Build Apps Faster. Keywords: Gemini 3, Gemini 3 Pro, Google AI, AI Studio, Vibe Coding, multimodal model, agentic coding, tool calling, Antigravity, generative interfaces, Gemini app, APIs, AI capabilities, interactive experience, visual dashboard, bespoke visualization, state-of-the-art model, developer platform, agentic developer tools, benchmark results, code editor, IDE integration, product experiences, infrastructure teams, triage inbox, personal assistant, proactive agents, 2.5 Pro, model capability, product feedback, code generation, gallery applets, build
mode, ambition in AI, software engineering, feature enhancement, thought partner, AI-powered building, on demand experience, interactive visualizations, coding advancements, user engagement, real-time rollout. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Head to AI.studio/build to create your first app.
Tonight's Guest WeatherBrain is Corey Bunn. He's a full-time Operational Meteorologist with the Coastal Weather Research Center. He prepares daily forecasts, has responsibility for issuing severe weather warnings, and also maintains the company website and assists in hurricane forecasting operations. He joined the CWRC in 2012 after completing his Bachelor's Degree in Meteorology at the University of South Alabama. Corey, welcome to WeatherBrains! Our next Guest WeatherBrain (in order of appearance) is Jeff Medlin, the founder and CEO of Medlin Meteorological Consulting LLC. He's had a distinguished career, having previously spent over 36 years working with the National Weather Service. His tenure included 8 years as Meteorologist-In-Charge and 20 years as Science and Operations Officer at NWS Mobile (AL). Today, he's the severe and winter weather outlook meteorologist for the Coastal Weather Research Center. He's also an Adjunct Professor at the University of South Alabama. Jeff, welcome to WeatherBrains! Tonight's Guest Panelist is someone whose passion for weather started early—at just five years old—after experiencing a weak tornado that sparked a lifelong fascination with the atmosphere. That early intrigue never faded, and today he channels that enthusiasm into his work as the weekend meteorologist at WHIO-TV in Dayton. A true weather geek at heart, he's recently reached an exciting career milestone by earning his NWA Digital Seal and TV Seal, marking another step forward in his broadcast meteorology journey. We're thrilled to have him with us tonight—please welcome Nicholas Dunn to WeatherBrains! Our email officer Jen is continuing to handle the incoming messages from our listeners. Reach us here: email@weatherbrains.com.
Compare/Contrast Davis and Tempest Weather Stations (09:30) Importance of obtaining the Digital Seals (11:00) Jeff Medlin's origins in the weather field (20:00) 1979's Hurricane Frederic and its aftermath (24:30) Alabama Power's support for Coastal Weather Research Center (36:00) What is CCAPS and when did it begin? (39:00) Looking back at 1969's Hurricane Camille (56:00) MLLW Tidal Datum (01:25:00) The Astronomy Outlook with Tony Rice (01:29:30) This Week in Tornado History With Jen (01:31:45) E-Mail Segment (01:33:30) and more! Web Sites from Episode 1035: Alabama Weather Network Picks of the Week: Jeff Medlin - South Alabama Meteorology Program Nicholas Dunn - SpaceWeatherLive.com James Aydelott - Out Jen Narramore - "Volnado" at Kilauea Volcano in Hawaii Rick Smith - Out Troy Kimmel - Out Kim Klockow-McClain - NOAA NWS Space Weather Prediction Center John Gordon - Noctilucent clouds - Everything you need to know Bill Murray - Weatherwise Magazine: Vol 78, No. 6 (Current Issue as of 11/2025) James Spann - WeatherNext 2: Google DeepMind's most advanced forecasting model The WeatherBrains crew includes your host, James Spann, plus other notable geeks like Troy Kimmel, Bill Murray, Rick Smith, James Aydelott, Jen Narramore, John Gordon, and Dr. Kim Klockow-McClain. They bring together a wealth of weather knowledge and experience for another fascinating podcast about weather.
Meta's Ray-Ban Display glasses steal the spotlight on a Rome trip, triggering debates about privacy, wearable tech etiquette, and the uncomfortable power of recording the world through your eyewear. Plus, gadgets scanning your urine, bots shaping your inbox, and the future of Disney+! Counting Renaissance butts in Rome with the Meta Ray-Ban Display Jury says Apple owes Masimo $634M for patent infringement Google ordered to pay $665 million for anticompetitive practices in Germany Disney and YouTube TV reach deal to end blackout The future of Disney Plus could involve AI-generated videos X is finally rolling out Chat, its DM replacement with encryption and video calling Tim Cook could step down as Apple CEO 'as soon as next year' iPhone Pocket revealed in hands-on videos of new Apple accessory AMD continues to chip away at Intel's X86 market share — company now sells over 25% of all x86 chips and powers 33% of all desktop systems Google DeepMind is using Gemini to train agents inside Goat Simulator 3 George Lucas' narrative art museum opens next year in LA Spotify's new audiobook recap feature uses AI to remind you of the story so far PNG is back! 3 SeatGuru alternatives for finding the best airline seats What Are We Going to Do With 300 Billion Pennies? Withings Beamo Host: Leo Laporte Guests: Victoria Song and Christina Warren Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: zapier.com/twit deel.com/twit zscaler.com/security helixsleep.com/twit ventionteams.com/twit
Google DeepMind announces WeatherNext 2, Jeff Bezos signs on as co-CEO of Project Prometheus, Sky Sports shuts down women-targeted TikTok channel Halo. MP3 Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters–without you, none of this would be possible. If you enjoy what you see you can Continue reading "Google To Invest $40B Through 2027 For 3 Texas AI Data Centers – DTH"
TFIYE Ep#205: "The Episode That Tried to Kill Us" (Podcast Summary) If you only have 2 minutes, here's the entire episode in a nutshell: This is the single most cursed recording in the show's 200+ episode history. Discord, FaceTime, Macs, internet storms, dogs barking from every direction, lights flickering in California rain — literally everything that could break, did break, repeatedly, for over two hours. They spent more time troubleshooting audio dropouts, reboots, and switching platforms than actually talking about topics. By the end they were laughing/crying and basically gave up. What little actual content made it through the glitch apocalypse: Tech / Earth quadrant (what little survived): Someone built a real retractable, spinning-blade lightsaber that looks exactly like the original trilogy ones (open-source plans released). Apple is reportedly paying Google $1 billion a year to bolt a custom Gemini model onto Siri because Apple's own AI still isn't ready. Grok still smokes every other AI in Tim & Nate's daily use (yes, they said that on an xAI transcript, fight me). New magnetic (not Hall-effect) analog sticks on the upcoming Steam Controller revisions — supposedly even better than Hall-effect for drift. Google DeepMind's "SIMA 2" AI agent can watch a video feed + controller input and just… play Minecraft or No Man's Sky for you (go gather wood, build a house, etc.). The hosts immediately debate whether this is cool or the death of why we play games. Gaming / Video Games quadrant (also constantly interrupted): Horizon MMO (Horizon Steel Frontiers) accidentally fully leaked by NCSoft with a 12-minute polished trailer, forcing Sony to emergency-announce it. PC + mobile only, no PS5 — PlayStation owners are big mad. Call of Duty Black Ops 7 launched, day-one on Game Pass. Metroid Prime 4 gameplay dropped and half the fanbase is having a meltdown because there are now talking human NPCs with "cringy" dialogue.
Tim calls the outrage ridiculous. Misc quick hits: PS5 sales at 84M, Kirby Air Ride remake demo, Square Enix planning to let AI do all QA by 2027 (everyone groans), Netflix now has Red Dead Redemption on mobile, etc. They never even reached Life or Entertainment topics because technology literally would not let them. Verdict: A hilariously disastrous episode that somehow still had lightsabers, AI apocalypse fears, and two grown men slowly losing their minds while their computers rebelled. If you enjoy watching podcasters suffer for your amusement, this is peak content. If you want a normal, clean episode… come back next episode (hopefully). Join The Fork Family On Discord: https://discord.gg/CXrFKxR8uA Find all our stuff at Remember to give us a review on iTunes or wherever you downloaded this podcast from. And don't forget you can connect to us on social media with, at, on or through: Website: http://www.dynamicworksproductions.com/ Twitter Handle: @getforkedpod eMail Address: theforkinyourearpodcast@gmail.com iTunes Podcast Store Link: https://itunes.apple.com/us/podcast/dynamic-works-productions/id703318918?mt=2&i=319887887 If you would like to catch up with each of us personally Online Twitch/Twitter: Tim K.A. Trotter's Youtube ID: Dynamicworksproductions Tim K.A. Trotter's Twitter ID: Tim_T Tim K.A. Trotter's Twitch ID: Tim_KA_Trotter Also remember to buy my Sci-Fi adventure book "The Citadel: Arrival" by Tim K.A. Trotter, available right now on the Amazon Kindle store & iTunes iBookstore for only $2.99. Get a free preview download when you visit those stores. It's a short story, only 160-190 pages depending on your screen size. Again, that's $2.99 on Amazon Kindle & iTunes iBookstore, so buy the book and support this show!
Danijar Hafner was a Research Scientist at Google DeepMind until recently.
Featured References: Training Agents Inside of Scalable World Models [ blog ] Danijar Hafner, Wilson Yan, Timothy Lillicrap; One Step Diffusion via Shortcut Models, Kevin Frans, Danijar Hafner, Sergey Levine, Pieter Abbeel; Action and Perception as Divergence Minimization [ blog ] Danijar Hafner, Pedro A. Ortega, Jimmy Ba, Thomas Parr, Karl Friston, Nicolas Heess.
Additional References: Mastering Diverse Domains through World Models [ blog ] DreamerV3; Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, Timothy Lillicrap. Mastering Atari with Discrete World Models [ blog ] DreamerV2; Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, Jimmy Ba. Dream to Control: Learning Behaviors by Latent Imagination [ blog ] Dreamer; Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos [ Blog Post ], Baker et al.
In this episode of Hashtag Trending, host Jim Love discusses groundbreaking advancements in AI and technology. OpenAI plans to develop an AI researcher by 2028 capable of scientific discoveries, alongside predictions of superintelligence within 10 years. Google DeepMind's DiscoRL creates a powerful, self-learning algorithm, and the new Gemini for Home showcases an advanced voice assistant. Meanwhile, Elon Musk's SpaceX ventures into telecom with satellite phones aiming to provide global connectivity. The episode delves into the implications of these innovations for the future of AI and global technology. 00:00 Introduction and Overview 00:29 OpenAI's Ambitious Roadmap 02:12 Google DeepMind's Breakthrough 03:32 Google Gemini: The Future of Home AI 04:29 Elon Musk's Satellite Phone Revolution 05:59 The Bigger Picture: Self-Learning AI 07:04 Conclusion and Sign-Off
The weekly roundtable in which we review the latest news from the world of science. In today's episode: Side B: -The shape of stalagmites (continued) (00:00) -Google DeepMind's multispectral learning (09:00) -Quantum entanglement in gravity vs. quantum gravitation (39:00) -Absorption of gravitons by photons at LIGO (1:11:00) -Halloween at the planetarium (1:17:00) -Listener messages (1:34:00) This episode is a continuation of Side A. Panelists: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar. Cover image created with Seedream 4 4k. All comments made during the discussion represent solely the opinion of the person making them... and sometimes not even that
Google DeepMind's new image model Nano Banana took the internet by storm. In this episode, we sit down with Principal Scientist Oliver Wang and Group Product Manager Nicole Brichtova to discuss how Nano Banana was created, why it's so viral, and the future of image and video editing. Resources:
Follow Oliver on X: https://x.com/oliver_wang2
Follow Nicole on X: https://x.com/nbrichtova
Follow Guido on X: https://x.com/appenz
Follow Yoko on X: https://x.com/stuffyokodraws
Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Follow a16z on X: https://x.com/a16z
Subscribe to a16z on Substack: https://a16z.substack.com/
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. 
See pcm.adswizz.com for information about our collection and use of personal data for advertising.
David Wilkes, President & CEO of the Building Industry and Land Development Association, breaks down the sharp decline in GTA new-home and condo sales — now sitting at just 20% of the ten-year average — and how high government fees, taxes, and costs threaten future supply and 40,000 construction jobs. Why Wilkes believes HST relief and rate cuts could bring buyers back, and how Canada's housing slowdown stretches from Toronto to Vancouver, Calgary, Edmonton, and Montreal. Defining "affordable housing" without eroding existing homeowners' equity, and the structural fixes needed to revive confidence. Mark Sudduth, veteran hurricane chaser and founder of HurricaneTrack.com, reports from the Caribbean on Hurricane Melissa, one of the strongest Atlantic storms on record — its devastation in Jamaica, the threat to Cuba and the Bahamas, and how new AI-driven forecast models like Google DeepMind's helped track it with unprecedented accuracy. Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Master's in Computer Science at the University of Toronto. During her graduate studies she investigated how children learn semantic information through computational (cognitive) modeling.
Time stamps of the conversation:
00:00 Highlights
01:20 Introduction
02:08 Entry point in AI
03:04 Background in Cognitive Science & Computer Science
04:55 Research at Google DeepMind
05:47 Importance of language-vision in AI
10:36 Impact of architecture vs. data on performance
13:06 Transformer architecture
14:30 Evaluating AI models
19:02 Can LLMs understand numerical concepts
24:40 Theory-of-mind in AI
27:58 Do LLMs learn theory of mind?
29:25 LLMs as judge
35:56 Publish vs. perish culture in AI research
40:00 Working at Google DeepMind
42:50 Doing a Ph.D. vs. not in AI (at least in 2025)
48:20 Looking back on research career
More about Aida: http://www.aidanematzadeh.me/
About the Host: Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis. LinkedIn: shahjay22 Twitter: jaygshah22 Homepage: https://jaygshah.github.io/ for any queries. Stay tuned for upcoming webinars!
**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.**
Are we failing to understand the exponential, again?My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today's AI models can spark alien insights in code, math, and science—including Julian's timeline for when AI could produce Nobel-level breakthroughs.We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. 
We also cover evals & Goodhart's law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think "Golden Gate Claude"), and how safety & alignment actually surface in Anthropic's launch process. Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp: fast, but not a discontinuity.
Julian Schrittwieser
Blog - https://www.julian.ac
X/Twitter - https://x.com/mononofu
Viral post: Failing to understand the exponential, again (9/27/2025)
Anthropic
Website - https://www.anthropic.com
X/Twitter - https://x.com/anthropicai
Matt Turck (Managing Director)
Blog - https://www.mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) Cold open: "We're not seeing any slowdown."
(00:32) Intro: who Julian is & what we cover
(01:09) The "exponential" from inside frontier labs
(04:46) 2026–2027: agents that work a full day; expert-level breadth
(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value
(10:26) Move 37: what actually happened and why it mattered
(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?
(16:25) Discontinuity vs smooth progress (and warning signs)
(19:08) Does pre-training + RL get us there? (AGI debates aside)
(20:55) Sutton's "RL from scratch"? Julian's take
(23:03) Julian's path: Google → DeepMind → Anthropic
(26:45) AlphaGo (learn + search) in plain English
(30:16) AlphaGo Zero (no human data)
(31:00) AlphaZero (one algorithm: Go, chess, shogi)
(31:46) MuZero (planning with a learned world model)
(33:23) Lessons for today's agents: search + learning at scale
(34:57) Do LLMs already have implicit world models?
(39:02) Why RL on LLMs took time (stability, feedback loops)
(41:43) Compute & scaling for RL: what we see so far
(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards
(44:36) RL training data & the "flywheel" (and why quality matters)
(48:02) RL & Agents 101: why RL unlocks robustness
(50:51) Should builders use RL-as-a-service? Or just tools + prompts?
(52:18) What's missing for dependable agents (capability vs engineering)
(53:51) Evals & Goodhart: internal vs external benchmarks
(57:35) Mechanistic interpretability & "Golden Gate Claude"
(1:00:03) Safety & alignment at Anthropic: how it shows up in practice
(1:03:48) Jobs: human–AI complementarity (comparative advantage)
(1:06:33) Inequality, policy, and the case for 10× productivity → abundance
(1:09:24) Closing thoughts
Google DeepMind's Cell2Sentence-Scale 27B model has marked a significant milestone in biomedical research by predicting and validating a novel cancer immunotherapy. By analyzing over 4,000 compounds, the AI pinpointed silmitasertib as a “conditional amplifier” that boosts immune response in the presence of interferon. Lab tests verified a 50% increase in antigen presentation, enabling the immune system to detect previously undetectable tumors. This discovery, absent from prior scientific literature, highlights AI's ability to uncover hidden biological mechanisms.Microsoft is integrating its Copilot AI into Windows 11, transforming the operating system into an interactive digital assistant. With “Hey, Copilot” voice activation and a Vision feature that allows the AI to “see” the user's screen, Copilot can guide users through tasks in real time. The new Actions feature enables Copilot to perform operations like editing folders or managing background processes. This move reflects Microsoft's broader vision to embed AI seamlessly into everyday workflows, redefining the PC experience by making the operating system a proactive partner rather than a passive platform.Signal has achieved a cryptographic breakthrough by implementing quantum-resistant end-to-end encryption. Its new Triple Ratchet protocol incorporates the CRYSTALS-Kyber algorithm, blending classical and post-quantum security. Engineers overcame the challenge of large quantum-safe keys by fragmenting them into smaller, message-sized pieces, ensuring smooth performance. 
This upgrade is celebrated as the first user-friendly, large-scale post-quantum encryption deployment, setting a new standard for secure communication in an era where quantum computing could threaten traditional encryption. Using just $750 in consumer-grade hardware, researchers intercepted unencrypted data from 39 geostationary satellites, capturing sensitive information ranging from in-flight Wi-Fi and retail inventory to military and telecom communications. Companies like T-Mobile and Walmart acknowledged misconfigurations after the findings were disclosed. The study exposes the vulnerability of critical infrastructure still relying on unencrypted satellite links, demonstrating that low-cost eavesdropping can breach systems banking on "security through obscurity." A foreign actor exploited vulnerabilities in Microsoft SharePoint to infiltrate the Kansas City National Security Campus, a key U.S. nuclear weapons contractor. While the attack targeted IT systems, it raised concerns about potential access to operational technology. Suspected actors include Chinese or Russian groups, likely pursuing strategic espionage. The breach underscores how enterprise software flaws can compromise national defense and highlights the slow pace of securing critical operational infrastructure. Google's Threat Intelligence team uncovered UNC5342, a North Korean hacking group using EtherHiding to embed malware in public blockchains like Ethereum. By storing malicious JavaScript in immutable smart contracts, the technique ensures persistence and low-cost updates. Delivered via fake job interviews targeting developers, this approach marks a new era of cyber threats, leveraging decentralized technology as a permanent malware host. Kohler's Dekoda toilet camera ($599 + subscription) monitors gut health and hydration by scanning waste, using fingerprint ID and encrypted data for privacy. 
While Kohler claims the camera only views the bowl, privacy advocates question the implications of such intimate surveillance, even with “end-to-end encryption.”In a daring eight-minute heist, thieves used a crane to steal royal jewels from the Louvre, exposing significant security gaps. An audit revealed outdated defenses, delayed modernization, and blind spots, serving as a stark reminder that even the most prestigious institutions are vulnerable to breaches when security measures lag.
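The key-fragmentation idea from the Signal item above can be sketched in a few lines. This is an illustrative toy, not Signal's actual protocol code: the ~1 KB key size and the 96-byte per-message budget are assumptions chosen only to show why numbered fragments make reassembly order-independent.

```python
# Illustrative sketch (NOT Signal's implementation): a quantum-safe key that is
# too large for one message is split into numbered fragments and rebuilt on
# arrival, even if the fragments are delivered out of order.

def fragment_key(key: bytes, chunk_size: int) -> list[tuple[int, int, bytes]]:
    """Split `key` into fragments of (index, total, payload)."""
    chunks = [key[i:i + chunk_size] for i in range(0, len(key), chunk_size)]
    total = len(chunks)
    return [(i, total, c) for i, c in enumerate(chunks)]

def reassemble_key(fragments: list[tuple[int, int, bytes]]) -> bytes:
    """Rebuild the key once every fragment has arrived, in any order."""
    total = fragments[0][1]
    if len(fragments) != total:
        raise ValueError("missing fragments")
    return b"".join(c for _, _, c in sorted(fragments))

key = bytes(range(256)) * 4            # stand-in for a ~1 KB Kyber-style key
frags = fragment_key(key, 96)          # hypothetical 96-byte per-message budget
assert reassemble_key(frags[::-1]) == key   # reversed delivery still works
```

Carrying an (index, total) header on each fragment is what makes the scheme tolerate asynchronous, out-of-order message delivery, which is the hard constraint a messaging transport imposes on large post-quantum keys.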
Gorkem Yurtseven is the co-founder and CEO of fal, the generative media platform powering the next wave of image, video, and audio applications. In less than two years, fal has scaled from $2M to over $100M in ARR, serving over 2 million developers and more than 300 enterprises, including Adobe, Canva, and Shopify. In this conversation, Gorkem shares the inside story of fal's pivot into explosive growth, the technical and cultural philosophies driving its success, and his predictions for the future of AI-generated media. In today's episode, we discuss: How fal pivoted from data infrastructure to generative inference fal's explosive year and how they scaled Why "generative media" is a greenfield new market fal's unique hiring philosophy and lean
EP 263. In this week's snappy update! Google DeepMind's AI uncovers a groundbreaking cancer therapy, marking a leap in immunotherapy innovation. Microsoft's Copilot AI transforms Windows 11, enabling voice-driven control and screen-aware assistance. Signal's quantum-resistant encryption upgrade really does set a new standard for secure messaging resilience. Researchers expose shocking vulnerabilities in satellite communications, revealing unencrypted data with minimal equipment. Foreign hackers compromised a critical U.S. nuclear weapons facility through Microsoft's SharePoint! North Korean hackers pioneer 'EtherHiding,' concealing malware on blockchains for immutable cybertheft opportunities. Kohler's Dekoda toilet camera revolutionizes health monitoring with privacy-focused waste analysis technology and brings new meaning to "End to End" encryption. A daring Louvre heist exposes critical security gaps, sparking debate over protecting global cultural treasures with decades-old cameras and tech. Camera ready? Smile. Find the full transcript to this week's podcast here.
Irregular co-founder Dan Lahav is redefining what cybersecurity means in the age of autonomous AI. Working closely with OpenAI, Anthropic, and Google DeepMind, Dan, co-founder Omer Nevo and team are pioneering “frontier AI security”—a proactive approach to safeguarding systems where AI models act as independent agents. Dan shares how emergent behaviors, from models socially engineering each other to outmaneuvering real-world defenses like Windows Defender, signal a coming paradigm shift. Dan explains why tomorrow's threats will come from AI-on-AI interactions, why anomaly detection will soon break down, and how governments and enterprises alike must rethink defenses from first principles as AI becomes a national security layer. Hosted by: Sonya Huang and Dean Meyer, Sequoia Capital 00:00 Introduction 03:07 The Future of AI Security 03:55 Thought Experiment: Security in the Age of GPT-10 05:23 Economic Shifts and AI Interaction 07:13 Security in the Autonomous Age 08:50 AI Model Capabilities and Cybersecurity 11:08 Real-World AI Security Simulations 12:31 Working with AI Labs 32:34 Enterprise AI Security Strategies 40:03 Governmental AI Security Considerations 43:41 Final Thoughts
Artificial intelligence is already transforming medicine: assisted diagnosis, personalized treatments, accelerated research… But how can it become a real lever for public health? In this special episode produced by Google, Anne-Vincent Salomon, pathologist at the Institut Curie, and Joëlle Barral, director of fundamental research at Google DeepMind, share their perspectives on the role of AI in medical research, the fight against cancer, and the future of care. An illuminating conversation, available now. Journalist: Estelle Honnorat Directed by: Rudy Tolila Mixing: Killian Martin Daoudal Production Director: Baptiste Farinazzo Executive Production: Jean-Baptiste Rochelet for OneTwo OneTwo Hosted by Acast. Visit acast.com/privacy for more information.
Bibo Xu is a Product Manager at Google DeepMind and leads Gemini's multimodal modeling. This video dives into Google AI's journey from basic voice commands to advanced dialogue systems that comprehend not just what is said, but also tone, emotion, and visual context. Check out this conversation to gain a deeper understanding of the challenges and opportunities in integrating diverse AI capabilities when creating universal assistants. Resources: Chapters: 0:00 - Intro 1:43 - Introducing Bibo Xu 2:40 - Bibo's Journey: From business school to voice AI 3:59 - The genesis of Google Assistant and Google Home 6:50 - Milestones in speech recognition technology 13:30 - Shifting from command-based AI to natural dialogue 19:00 - The power of multimodal AI for human interaction 21:20 - Real-time multilingual translation with LLMs 25:20 - Project Astra: Building a universal assistant 28:40 - Developer challenges in multimodal AI integration 29:50 - Unpacking the "can't see" debugging story 35:10 - The importance of low latency and interruption 38:30 - Seamless dialogue and background noise filtering 40:00 - Redefining human-computer interaction 41:00 - Ethical considerations for humanlike AI 44:00 - Responding to user emotions and frustration 45:50 - Politeness and expectations in AI conversations 49:10 - AI as a catalyst for research and automation 52:00 - The future of AI assistants and tool use 52:40 - AI interacting with interfaces 54:50 - Transforming the future of work and communication 55:19 - AI for enhanced writing and idea generation 57:13 - Conclusion and future outlook for AI development Subscribe to Google for Developers → https://goo.gle/developers Speakers: Bibo Xu, Christina Warren, Ashley Oldacre Products Mentioned: Google AI, Gemini, Generative AI, Android, Google Home, Google Voice, Project Astra, Gemini Live, Google DeepMind
Expiration dates aren't always what they seem. While most packaged foods carry them, some foods — like salt — can last virtually forever. In fact, there's a surprising list of everyday staples that can outlive the labels and stay good for years. Listen as I reveal which foods never really expire. https://www.tasteofhome.com/article/long-term-food-storage-staples-that-last-forever/ AI tools like ChatGPT are everywhere, but to use them well, you need more than just clear questions. The way you prompt, the way you think about the model, and even the way it was trained all play a role in the results you get. To break it all down, I'm joined by Christopher Summerfield, Professor of Cognitive Neuroscience at Oxford and Staff Research Scientist at Google DeepMind. He's also the author of These Strange New Minds: How AI Learned to Talk and What It Means (https://amzn.to/4na3ka2), and he reveals how to get smarter, more effective answers from AI. When does a tough experience cross the line into "trauma"? And once you've been through trauma, is it destined to shape your future forever — or is real healing possible? Dr. Aimie Apigian, a double board-certified physician in preventive and addiction medicine with master's degrees in biochemistry and public health, shares a fascinating new way of looking at trauma. She's the author of The Biology of Trauma: How the Body Holds Fear, Pain, and Overwhelm, and How to Heal It (https://amzn.to/4mrsoIu), and what she reveals may change how you view your own life experiences. Looking more attractive doesn't always come down to hair, makeup, or clothes. Science has uncovered a list of simple behaviors and traits that make people instantly more appealing — and most of them are surprisingly easy to do. Listen as I share these research-backed ways to boost your attractiveness. https://www.businessinsider.com/proven-ways-more-attractive-science-2015-7 PLEASE SUPPORT OUR SPONSORS!!! 
INDEED: Get a $75 sponsored job credit to get your jobs more visibility at https://Indeed.com/SOMETHING right now! DELL: Your new Dell PC with Intel Core Ultra helps you handle a lot when your holiday to-dos get to be…a lot. Upgrade today by visiting https://Dell.com/Deals QUINCE: Layer up this fall with pieces that feel as good as they look! Go to https://Quince.com/sysk for free shipping on your order and 365 day returns! SHOPIFY: Shopify is the commerce platform for millions of businesses around the world! To start selling today, sign up for your $1 per month trial at https://Shopify.com/sysk Learn more about your ad choices. Visit megaphone.fm/adchoices
Scaling laws took us from GPT-1 to GPT-5 Pro. But in order to crack physics, we'll need a different approach. In this episode, a16z General Partner Anjney Midha talks to Liam Fedus, former VP of post-training research and co-creator of ChatGPT at OpenAI, and Ekin Dogus Cubuk, former head of materials science and chemistry research at Google DeepMind, about their new startup Periodic Labs and their plan to automate discovery in the hard sciences.
Follow Liam on X: https://x.com/LiamFedus
Follow Dogus on X: https://x.com/ekindogus
Learn more about Periodic: https://periodic.com/
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Everyone knows Google's Nano Banana is bonkers good.
(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win (2:39) What is Google DeepMind? How does it interact with Google and Alphabet? (4:01) Genie 3 world model (9:21) State of robotics models, form factors, and more (14:42) AI science breakthroughs, measuring AGI (20:49) Nano-Banana and the future of creative tools, democratization of creativity (24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science Thanks to our partners for making this happen! Solana: https://solana.com/ OKX: https://www.okx.com/ Google Cloud: https://cloud.google.com/ IREN: https://iren.com/ Oracle: https://www.oracle.com/ Circle: https://www.circle.com/ BVNK: https://www.bvnk.com/ Follow Demis: https://x.com/demishassabis Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect