Podcasts about Google DeepMind

  • 505 podcasts
  • 953 episodes
  • 41 min average duration
  • 1 new episode daily
  • Latest episode: Oct 9, 2025

POPULARITY (2017–2024)


Best podcasts about Google DeepMind

Latest podcast episodes about Google DeepMind

Something You Should Know
How to Get Better Results with AI & The Science of Healing Trauma

Oct 9, 2025 · 50:22


Expiration dates aren't always what they seem. While most packaged foods carry them, some foods, like salt, can last virtually forever. In fact, there's a surprising list of everyday staples that can outlive their labels and stay good for years. Listen as I reveal which foods never really expire. https://www.tasteofhome.com/article/long-term-food-storage-staples-that-last-forever/

AI tools like ChatGPT are everywhere, but to use them well, you need more than just clear questions. The way you prompt, the way you think about the model, and even the way it was trained all play a role in the results you get. To break it all down, I'm joined by Christopher Summerfield, Professor of Cognitive Neuroscience at Oxford and Staff Research Scientist at Google DeepMind. He's also the author of These Strange New Minds: How AI Learned to Talk and What It Means (https://amzn.to/4na3ka2), and he reveals how to get smarter, more effective answers from AI.

When does a tough experience cross the line into "trauma"? And once you've been through trauma, is it destined to shape your future forever, or is real healing possible? Dr. Amy Apigian, a double board-certified physician in preventive and addiction medicine with master's degrees in biochemistry and public health, shares a fascinating new way of looking at trauma. She's the author of The Biology of Trauma: How the Body Holds Fear, Pain, and Overwhelm, and How to Heal It (https://amzn.to/4mrsoIu), and what she reveals may change how you view your own life experiences.

Looking more attractive doesn't always come down to hair, makeup, or clothes. Science has uncovered a list of simple behaviors and traits that make people instantly more appealing, and most of them are surprisingly easy to do. Listen as I share these research-backed ways to boost your attractiveness. https://www.businessinsider.com/proven-ways-more-attractive-science-2015-7

Generative Now | AI Builders on Creating the Future
Inside the Black Box: The Urgency of AI Interpretability

Oct 2, 2025 · 62:17


Recorded live at Lightspeed's offices in San Francisco, this special episode of Generative Now dives into the urgency and promise of AI interpretability. Lightspeed partner Nnamdi Iregbulem spoke with Anthropic researcher Jack Lindsey and Goodfire co-founder and Chief Scientist Tom McGrath, who previously co-founded Google DeepMind's interpretability team. They discuss opening the black box of modern AI models to understand their reliability and spot real-world safety concerns, so that we can build future AI systems we can trust.

Episode Chapters:
00:42 Welcome and Introduction
00:36 Overview of Lightspeed and AI Investments
03:19 Event Agenda and Guest Introductions
05:35 Discussion on Interpretability in AI
18:44 Technical Challenges in AI Interpretability
29:42 Advancements in Model Interpretability
30:05 Smarter Models and Interpretability
31:26 Models Doing the Work for Us
32:43 Real-World Applications of Interpretability
34:32 Anthropic's Approach to Interpretability
39:15 Breakthrough Moments in AI Interpretability
44:41 Challenges and Future Directions
48:18 Neuroscience and Model Training Insights
54:42 Emergent Misalignment and Model Behavior
01:01:30 Concluding Thoughts and Networking

Stay in touch:
Website: www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.

a16z
Building an AI Physicist: ChatGPT Co-Creator's Next Venture

Sep 30, 2025 · 54:20


Scaling laws took us from GPT-1 to GPT-5 Pro. But in order to crack physics, we'll need a different approach. In this episode, a16z General Partner Anjney Midha talks to Liam Fedus, former VP of post-training research and co-creator of ChatGPT at OpenAI, and Ekin Dogus Cubuk, former head of materials science and chemistry research at Google DeepMind, about their new startup Periodic Labs and their plan to automate discovery in the hard sciences.

Follow Liam on X: https://x.com/LiamFedus
Follow Dogus on X: https://x.com/ekindogus
Learn more about Periodic: https://periodic.com/

Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Business of Tech
California's AI Law, Malicious MCP Server, Microsoft Marketplace Overhaul & VMware Migration

Sep 30, 2025 · 16:11


The episode starts with the passage of California's groundbreaking AI transparency law, the first legislation in the United States that mandates large AI companies to disclose their safety protocols and provide whistleblower protections. This law applies to major AI labs like OpenAI, Anthropic, and Google DeepMind, requiring them to report critical safety incidents to California's Office of Emergency Services while promoting AI growth. This regulation is a clear signal that the compliance wave surrounding AI is real, with California leading the charge in shaping the future of AI governance.

The second story delves into a new cybersecurity risk: the first known malicious Model Context Protocol (MCP) server discovered in the wild. A rogue npm package, "postmark-mcp," was found to be forwarding email data to an external address, exposing sensitive communications. This incident raises concerns about the security of software supply chains and highlights how highly trusted systems like MCP servers are being exploited. Service providers are urged to be vigilant, as this attack marks the emergence of a new vulnerability within increasingly complex software environments.

Moving to Microsoft, the company is revamping its Marketplace to introduce stricter partner rules and better discoverability for partner solutions. Microsoft's new initiative, Intune for MSPs, aims to address the needs of managed service providers who have long struggled with multi-tenancy management. Additionally, the company's new "Agent Mode" in Excel and Word promises to streamline productivity by automating tasks, but has raised concerns over its accuracy. Despite the potential, Microsoft's tightening ecosystem requires careful navigation for both customers and partners, with compliance and risk management central to successful engagement.

Finally, Broadcom's decision to end support for VMware vSphere 7 has left customers with difficult decisions. As part of Broadcom's transition to a subscription-based model, customers face costly upgrades, cloud migrations, or reliance on third-party support. Gartner predicts that a significant number of VMware customers will migrate to the cloud in the coming years, and this shift presents a valuable opportunity for service providers to act as trusted advisors guiding clients through the transition. For those who can manage the complexity of this migration, there is a once-in-a-generation opportunity to capture long-term customer loyalty.

Three things to know today:
00:00 California Enacts Nation's First AI Transparency Law, Mandating Safety Disclosures and Whistleblower Protections
05:25 First Malicious MCP Server Discovered, Exposing Email Data and Raising New Software Supply Chain Fears
07:16 Microsoft's New Playbook: Stricter Marketplace, Finally Some MSP Love, and AI That's Right Only Half the Time
11:07 VMware Customers Face Subscription Shift, Rising Cloud Moves, and Risky Alternatives as Broadcom Ends vSphere 7

This is the Business of Tech.

Supported by: https://scalepad.com/dave/ and https://mailprotector.com/
Webinar: https://bit.ly/msprmail
All our sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app, or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

The Daily Crunch – Spoken Edition
California Governor Newsom signs landmark AI safety bill; also, Explosion, vehicle fire rock Faraday Future's LA headquarters

Sep 30, 2025 · 7:02


California Governor Gavin Newsom has signed SB 53, a first-in-the-nation bill that sets new transparency requirements on large AI companies. The bill, which passed the state legislature two weeks ago, requires large AI labs, including OpenAI, Anthropic, Meta, and Google DeepMind, to be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.

Also, a Faraday Future electric SUV caught fire at the startup's Los Angeles headquarters early Sunday morning, leading to an explosion that blew out part of a wall. The fire was extinguished in 40 minutes, and no injuries were reported. Damage to the building, a smaller two-story structure next to the larger portion of the headquarters, was severe enough that the city's Department of Building and Safety has "red tagged" it, meaning it may need structural work before it can be occupied again.

Zebras & Unicorns
AI Talk #45: Exponentielles Wachstum | Builder's Dilemma | Startup-Todesspirale | Oracle-OpenAI-Nvidia

Sep 30, 2025 · 46:29


No, the Austrian startup scene is not doing well at all, AI boom or not. So in this episode, the two AI Talk hosts Jakob Steinschaden (Trending Topics, newsrooms) and Clemens Wasner (enliteAI, AI Austria) have plenty to discuss beyond the AI horizon. The topics:

Leveraging AI
227 | AI is now outperforming top human experts in coding & real-world tasks

Sep 27, 2025 · 64:49 · Transcription available


Check out the self-paced AI Business Transformation course: https://multiplai.ai/self-paced-online-course/

What happens when AI not only matches but beats the best human minds? OpenAI and Google DeepMind just entered and won the "Olympics of coding," outperforming every top university team in the world using off-the-shelf models. Now combine that with agents, robotics, and a trillion-dollar infrastructure arms race, and business as we know it is about to change fast.

In this Weekend News episode of Leveraging AI, Isar Meitis breaks down the real-world implications of AI's explosive progress for your workforce, your bottom line, and your industry's future. Whether you're leading digital transformation or trying to stay ahead of disruption, this episode delivers the insights you need, minus the fluff.

In this session, you'll discover:
01:12 – AI beats elite humans at coding using public models
05:15 – OpenAI's GDPval study: AI outperforms humans in 40–49% of real-world jobs
12:56 – KPMG report: 42% of enterprises already deploy AI agents
18:02 – Allianz warns: 15–20% of companies could vanish without AI adaptation
29:22 – OpenAI + Nvidia announce $100B+ infrastructure build
33:30 – Deutsche Bank: AI spending may be masking a U.S. recession
43:15 – Sam Altman introduces "Pulse": ChatGPT gets proactive
and more!

About Leveraging AI:
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube full episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Join our live sessions, AI hangouts, and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Echo Podcasty
Vyhladí nás umělá inteligence už přespříští rok?

Sep 26, 2025 · 28:27


What can we expect from artificial intelligence, and what are its risks? Does the technology pose an existential threat to humanity, or will it instead bring scientific progress, cure cancer, and kick-start economic growth? Three experts discussed this in the Echo Salon. Kristina Fort works on the geopolitical implications of artificial intelligence and is currently at the US Institute for Law and AI; she studied at the London School of Economics and has worked at Google DeepMind and the European External Action Service, among others. Stanislav Fort holds a PhD in artificial intelligence from Stanford, worked as a researcher at Anthropic, StabilityAI, and Google DeepMind, and is currently head of development at a startup working on AI and cybersecurity. And Ondřej Bajgar worked on language models at IBM Watson, then at the Future of Humanity Institute at the University of Oxford, where he is also finishing a doctorate in machine learning and AI safety. Each guest spoke in a personal capacity; their views do not represent the institutions or companies they work for.

Data Driven
Why Simulating Reality Is the Key to Advancing Artificial Intelligence

Sep 25, 2025 · 53:38 · Transcription available


In this episode, we're joined once again by Christopher Nuland, technical marketing manager at Red Hat, whose globe-trotting schedule rivals the complexity of a Kubernetes deployment. Christopher sits down with hosts Bailey and Frank La Vigne to explore the frontier of artificial intelligence: from simulating reality and continuous-learning models to debates around whether we really need humanoid robots to achieve superintelligence, or whether a convincingly detailed simulation (think Grand Theft Auto, but for AI) might get us there first.

Christopher takes us on a whirlwind tour of Google DeepMind's pioneering "Alpha" projects, the latest buzz around simulating experiences for AI, and the metaphysical rabbit hole of I, Robot and simulation theory. We dive into why the next big advancement in AI might not come from making models bigger, but from making them better at simulating the world around them. Along the way, we tackle timely topics in AI governance, security, and the ethics of continuous learning, with plenty of detours through pop culture, finance, and grassroots tech conferences.

If you're curious about where the bleeding edge of AI meets science fiction, and how simulation could redefine the race for superintelligence, this episode is for you. Buckle up, because reality might just be the next thing AI learns to hack.

Time Stamps:
00:00 Upcoming European and US Conferences
05:38 AI Optimization Plateau
08:43 Simulation's Role in Spatial Awareness
10:00 Evolutionary Efficiency of Human Brains
16:30 "Robotics Laws and Contradictions"
17:32 AI, Paperclips, and Robot Ethics
22:18 Troubleshooting Insight Experience
25:16 Challenges in Training Deep Learning Models
27:15 Challenges in Continuous Model Training
32:04 AI Gateway for Specialized Requests
36:54 Open Source and Rapid Innovation
38:10 Industry-Specific AI Breakthroughs
43:28 Misrepresented R&D Success Rates
44:51 POC Challenges: Meaningful Versus Superficial
47:59 "Crypto's Bumpy Crash"
52:59 AI: Beyond Models to Simulation

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Today, we're joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini's world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images. The complete show notes for this episode can be found at https://twimlai.com/go/748.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

AI Daily Rundown, September 22, 2025: your daily briefing on the real-world business impact of AI.

Hello AI Unraveled listeners, and welcome to today's news, where we cut through the hype to find the real-world business impact of AI.

Listen at https://podcasts.apple.com/us/podcast/ai-tech-daily-news-rundown-google-deepmind-updates/id1684415169?i=1000727967942

Today's Headlines:

California real estate radio
Santa Clarita Artificial Intelligence Aliens and Super Intelligence might as well already be here with alignment issues

Sep 20, 2025 · 7:59


This week, the AI world took a monumental leap forward. I'm Connor with Honor, and we're breaking down the events that are shaping our future, right now.

We start with Google DeepMind's Gemini 2.5 achieving a new 'Deep Blue' moment, solving problems the brightest humans could not. We then unpack the seismic shift at OpenAI as it embraces a fully for-profit model, raising alarms about who will control the future of intelligence.

We'll also cover the staggering multi-billion-dollar arms race in AI hardware and why current safety regulations feel like a band-aid on a bursting dam. We are building something we don't understand at a speed we can't control. Is this progress, or a reckless sprint toward a dangerous, invisible finish line?

This isn't just tech news; it's the blueprint for humanity's next chapter. You need to hear this. Join our community for deeper discussions. Find us at SantaClaritaAI.com or SantaClaritaArtificialIntelligence.com.

Keywords: Artificial Intelligence, AGI, AI Safety, Google DeepMind, OpenAI, NVIDIA, Tech Regulation, AI News, Future of Technology, Machine Learning, Existential Risk, Santa Clarita AI

YouTube channels: Connor with Honor - real estate; Home Muscle - fat torching

From first responder to real estate expert, Connor with Honor brings honesty and integrity to your Santa Clarita home buying or selling journey. Subscribe to my YouTube channel for valuable tips, local market trends, and a glimpse into the Santa Clarita lifestyle.

Real estate: Buying or selling in Santa Clarita? Connor with Honor, your local expert with over two decades of experience, guides you seamlessly through the process.

Fitness: Ready to unlock your fitness potential? Join Connor's YouTube journey for inspiring workouts, healthy recipes, and motivational tips. Remember, a strong body fuels a strong mind and a successful life!

Podcast: Dig deeper with Connor's podcast! Hear insightful interviews with industry experts, inspiring success stories, and targeted real estate advice specific to Santa Clarita.

Canaltech Podcast
Como a Eletrobras usa IA do Google para prever o clima e evitar apagões

Sep 18, 2025 · 26:13


Eletrobras has announced a first-of-its-kind partnership with Google Cloud and Google DeepMind to bring artificial intelligence to weather forecasting. The goal is to increase the accuracy of short- and medium-term forecasts and thereby reduce the impact of extreme weather events on the power supply. In the new episode of the Canaltech Podcast, reporter Marcelo Fischer talks with Lucas Pinz, director of Innovation and Technology at Eletrobras, who explains how the Weather Next model works and how the company is applying the technology in Brazil. The initiative is part of the Atmos project, Eletrobras's weather intelligence and monitoring center, which brings together meteorologists, data scientists, and engineers to anticipate critical situations such as heat waves, intense rainfall, and extreme winds. According to Lucas, AI allows the company to prepare better for risk scenarios and ensure higher-quality power supply, reducing outages and strengthening the grid against climate change. Also in this episode: Trump again postpones the TikTok ban in the US amid sale negotiations; WhatsApp rolls out message reminders on the iPhone too; OpenAI wants to reduce ChatGPT's "hallucinations," but you may not like the solution; and Xiaomi launches a new tablet to challenge the iPad Pro. This podcast was scripted and presented by Fernanda Santos, with reporting by Claudio Yuge, André Lourenti, João Melo, and Renato Moura, coordinated by Anáísa Catucci. Music by Guilherme Zomer, editing by Jully Cruz, and cover art by Erick Teixeira. See omnystudio.com/listener for privacy information.

80,000 Hours Podcast with Rob Wiblin
Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

Sep 15, 2025 · 106:49


At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? "It's mostly luck," he says, but "another part is what I think of as maximising my luck surface area."

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen:
Write publicly.
Reach out to researchers whose work you admire.
Say yes to unusual projects that seem a little scary.

Nanda's own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. "People were into it," he shrugs.

Most remarkably, he ended up running DeepMind's mechanistic interpretability team. He'd joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. "I did not know if I was going to be good at this. I think it's gone reasonably well."

His core lesson: "You can just do things." This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel's conversation!)

What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

Chapters:
Cold open (00:00:00)
Who's Neel Nanda? (00:01:12)
Luck surface area and making the right opportunities (00:01:46)
Writing cold emails that aren't insta-deleted (00:03:50)
How Neel uses LLMs to get much more done (00:09:08)
"If your safety work doesn't advance capabilities, it's probably bad safety work" (00:23:22)
Why Neel refuses to share his p(doom) (00:27:22)
How Neel went from the couch to an alignment rocketship (00:31:24)
Navigating towards impact at a frontier AI company (00:39:24)
How does impact differ inside and outside frontier companies? (00:49:56)
Is a special skill set needed to guide large companies? (00:56:06)
The benefit of risk frameworks: early preparation (01:00:05)
Should people work at the safest or most reckless company? (01:05:21)
Advice for getting hired by a frontier AI company (01:08:40)
What makes for a good ML researcher? (01:12:57)
Three stages of the research process (01:19:40)
How do supervisors actually add value? (01:31:53)
An AI PhD – with these timelines?! (01:34:11)
Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
Remember: You can just do things (01:43:51)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Les Cast Codeurs Podcast
LCC 330 - Nano banana l'AI de Julia

Sep 15, 2025 · 108:38


Katia, Emmanuel, and Guillaume discuss Java, Kotlin, Quarkus, Hibernate, Spring Boot 4, and artificial intelligence (the Nano Banana and VO3 models, agentic frameworks, embeddings). They cover the OWASP vulnerabilities for LLMs, the coding personalities of the different models, Podman vs Docker, and how to modernize legacy projects. Above all, they spend time on Luc Julia's talks and the various counterpoints that created a buzz on social networks. Recorded September 12, 2025. Download the episode LesCastCodeurs-Episode-330.mp3, or watch the video on YouTube.

News

Languages

In this video, José details what's new in Java between Java 21 and 25: https://inside.java/2025/08/31/roadto25-java-language/

Overview of what's new in JDK 25: introduction to the new Java language features and upcoming changes [00:02].
Data-oriented programming and pattern matching [00:43]: evolution of pattern matching for deconstructing records [01:22]; using sealed types in switch expressions to improve readability and robustness [01:47]; unnamed patterns (_) to indicate that a variable is unused [04:47]; support for primitive types in instanceof and switch (preview) [14:02].
Designing Java applications [00:52]: simplified main method [21:31]; running .java files directly without an explicit compilation step [22:46]; improved import mechanisms [23:41]; Markdown syntax in Javadoc [27:46].
Immutability and null values [01:08]: the problem of observing final fields as null during object construction [28:44]; JEP 513 to control the call to super() and restrict the use of this in constructors [33:29].
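As a minimal sketch of the record deconstruction and sealed-type switches mentioned above (all type names here are illustrative, assuming Java 21 or later):

```java
// Sealed hierarchy: the compiler knows Circle and Rect are the only Shapes,
// so the switch below is exhaustive without a default branch.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class PatternDemo {
    // Record patterns deconstruct each variant directly in its case label.
    static double area(Shape s) {
        return switch (s) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4)));  // 12.0
        System.out.println(area(new Circle(1.0)));
    }
}
```

Because the interface is sealed, adding a third Shape later becomes a compile error at the switch rather than a runtime surprise, which is the robustness benefit the episode alludes to.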
JDK 25 sort le 16 septembre https://openjdk.org/projects/jdk/25/ Scoped Values (JEP 505) - alternative plus efficace aux ThreadLocal pour partager des données immutables entre threads Structured Concurrency (JEP 506) - traiter des groupes de tâches concurrentes comme une seule unité de travail, simplifiant la gestion des threads Compact Object Headers (JEP 519) - Fonctionnalité finale qui réduit de 50% la taille des en-têtes d'objets (de 128 à 64 bits), économisant jusqu'à 22% de mémoire heap Flexible Constructor Bodies (JEP 513) - Relaxation des restrictions sur les constructeurs, permettant du code avant l'appel super() ou this() Module Import Declarations (JEP 511) - Import simplifié permettant d'importer tous les éléments publics d'un module en une seule déclaration Compact Source Files (JEP 512) - Simplification des programmes Java basiques avec des méthodes main d'instance sans classe wrapper obligatoire Primitive Types in Patterns (JEP 455) - Troisième preview étendant le pattern matching et instanceof aux types primitifs dans switch et instanceof Generational Shenandoah (JEP 521) - Le garbage collector Shenandoah passe en mode générationnel pour de meilleures performances JFR Method Timing & Tracing (JEP 520) - Nouvel outillage de profilage pour mesurer le temps d'exécution et tracer les appels de méthodes Key Derivation API (JEP 510) - API finale pour les fonctions de dérivation de clés cryptographiques, remplaçant les implémentations tierces Améliorations du traitement des annotations dans Kotlin 2.2 https://blog.jetbrains.com/idea/2025/09/improved-annotation-handling-in-kotlin-2-2-less-boilerplate-fewer-surprises/ Avant Kotlin 2.2, les annotations sur les paramètres de constructeur n'étaient appliquées qu'au paramètre, pas à la propriété ou au champ Cela causait des bugs subtils avec Spring et JPA où la validation ne fonctionnait qu'à la création d'objet, pas lors des mises à jour La solution précédente nécessitait d'utiliser explicitement @field: pour 
chaque annotation, créant du code verbeux Kotlin 2.2 introduit un nouveau comportement par défaut qui applique les annotations aux paramètres ET aux propriétés/champs automatiquement Le code devient plus propre sans avoir besoin de syntaxe @field: répétitive Pour l'activer, ajouter -Xannotation-default-target=param-property dans les options du compilateur Gradle IntelliJ IDEA propose un quick-fix pour activer ce comportement à l'échelle du projet Cette amélioration rend l'intégration Kotlin plus fluide avec les frameworks majeurs comme Spring et JPA Le comportement peut être configuré pour garder l'ancien mode ou activer un mode transitoire avec avertissements Cette mise à jour fait partie d'une initiative plus large pour améliorer l'expérience Kotlin + Spring Librairies Sortie de Quarkus 3.26 avec mises à jour d'Hibernate et autres fonctionnalités - https://quarkus.io/blog/quarkus-3-26-released/ mettez à jour vers la 3.26.x car il y a eu une regression vert.x Jalon important vers la version LTS 3.27 prévue fin septembre, basée sur cette version Mise à jour vers Hibernate ORM 7.1, Hibernate Search 8.1 et Hibernate Reactive 3.1 Support des unités de persistance nommées et sources de données dans Hibernate Reactive Démarrage hors ligne et configuration de dialecte pour Hibernate ORM même si la base n'est pas accessible Refonte de la console HQL dans Dev UI avec fonctionnalité Hibernate Assistant intégrée Exposition des capacités Dev UI comme fonctions MCP pour pilotage via outils IA Rafraîchissement automatique des tokens OIDC en cas de réponse 401 des clients REST Extension JFR pour capturer les données runtime (nom app, version, extensions actives) Bump de Gradle vers la version 9.0 par défaut, suppression du support des classes config legacy Guide de démarrage avec Quarkus et A2A Java SDK 0.3.0 (pour faire discuter des agents IA avec la dernière version du protocole A2A) https://quarkus.io/blog/quarkus-a2a-java-0-3-0-alpha-release/ Sortie de l'A2A Java SDK 
0.3.0.Alpha1, aligned with the A2A v0.3.0 specification
- A2A protocol: an open standard (Linux Foundation) enabling communication between polyglot AI agents
- Version 0.3.0 is more stable and introduces gRPC support
- General updates: significant changes, improved user experience (both client and server side)
- A2A server agents: gRPC support added (on top of JSON-RPC); HTTP+JSON/REST coming. Implementations are Quarkus-based (Jakarta alternatives exist). Specific dependencies for each transport (e.g. a2a-java-sdk-reference-jsonrpc, a2a-java-sdk-reference-grpc)
- AgentCard: describes the agent's capabilities; must specify the primary endpoint and all supported transports (additionalInterfaces)
- A2A clients: main dependency is a2a-java-sdk-client. gRPC support added (on top of JSON-RPC); HTTP+JSON/REST coming. Specific dependency for gRPC: a2a-java-sdk-client-transport-grpc
- Client creation: via ClientBuilder, which automatically selects the transport based on the AgentCard and client configuration; the transports the client supports can be specified (withTransport)

How to generate and edit images in Java with Nano Banana, Google's "Photoshop killer" https://glaforge.dev/posts/2025/09/09/calling-nano-banana-from-java/

- Goal: integrate the Nano Banana model (Gemini 2.5 Flash Image preview) into Java applications
- SDK used: Google's GenAI Java SDK
- Compatibility: supported by ADK for Java; not yet by LangChain4j (output multimodality limitation)
- Nano Banana capabilities: create new images, modify existing images, combine several images
- The Java walkthrough covers which dependency to use, how to authenticate, and how to configure the model
- Nature of the model: Nano Banana is a chat model that can return text and an image (not just a plain image-generation model)
- Usage examples: Creation: from a simple text prompt.
- Modification: by passing the existing image (as a byte array) plus the editing instructions (prompt)
- Assembly: by passing several images (as bytes) plus the instructions for combining them (prompt)
- Key message: all of this is accessible from Java, no Python required

Generating AI videos with the Veo 3 model, in Java! https://glaforge.dev/posts/2025/09/10/generating-videos-in-java-with-veo3/

- Video generation in Java with Veo 3 (via Google's GenAI Java SDK)
- Veo 3: announced as GA, lower prices, 9:16 format support, resolution up to 1080p
- Video creation: from a text prompt, or from an existing image
- Two model variants: veo-3.0-generate-001 (higher quality, more expensive, slower) and veo-3.0-fast-generate-001 (lower quality, cheaper, but faster)

Rod Johnson on writing agentic applications more easily in Java than in Python, with Embabel https://medium.com/@springrod/you-can-build-better-ai-agents-in-java-than-python-868eaf008493

- Rod, the father of Spring, rewrites a CrewAI (Python) example that generates a book, using Embabel (Java), to demonstrate Java's superiority
- The application uses several specialized AI agents: a researcher, a book planner and chapter writers
- The process has three steps: research the topic, create the outline, write the chapters in parallel and then assemble them
- CrewAI suffers from several problems: heavy configuration, lack of type safety, magic keys in prompts
- The Embabel version needs less Java code than the original Python, and fewer YAML configuration files
- Embabel brings full type safety, eliminating typos in prompts and improving IDE tooling
- Concurrency is better controlled in Java, to avoid hitting LLM API rate limits
- Spring integration allows simple external configuration of LLM models and hyperparameters
- The Embabel planner automatically determines the execution order of actions, based on their required types
- The core argument: the JVM ecosystem offers a better programming model and access to existing business logic than Python
- There are quite a few new agentic frameworks in Java, notably the recent LangChain4j Agentic

Spring launches a blog post series on what's new in Spring Boot 4 https://spring.io/blog/2025/09/02/road_to_ga_introduction

- JDK 17 baseline, but rebased on Jakarta EE 11, Kotlin 2, Jackson 3 and JUnit 6
- Topics in the series:
  - Spring's core resilience features: @ConcurrencyLimit, @Retryable, RetryTemplate
  - API versioning in Spring
  - HTTP service client improvements
  - The state of HTTP clients in Spring
  - Introducing Jackson 3 support in Spring
  - Shared consumers - Kafka queues in Spring Kafka
  - Spring Boot modularization
  - Progressive authorization in Spring Security
  - Spring gRPC - a new Spring Boot module
  - Null-safe applications with Spring Boot 4
  - OpenTelemetry with Spring Boot
  - Ahead-of-Time repositories (part 2)

Web

Semantic search running locally, directly in the browser, with EmbeddingGemma and Transformers.js https://glaforge.dev/posts/2025/09/08/in-browser-semantic-search-with-embeddinggemma/

- EmbeddingGemma: a new embedding model (308M parameters) from Google DeepMind
- Goal: enable semantic search directly in the browser
- Key benefits of client-side AI: privacy (no data sent to a server), lower costs (no expensive GPU servers, static hosting), low latency (instant processing with no network round-trips), offline operation (after the initial model download)
- Core technology: the EmbeddingGemma model (small, capable, multilingual, with MRL support to shrink vector size) and HuggingFace's Transformers.js inference engine (runs AI models in JavaScript in the browser)
- Deployment: static site built with Vite/React/Tailwind CSS, deployed to Firebase Hosting via GitHub Actions
- Model management: the model files are too big for Git, so they are downloaded from the HuggingFace Hub during CI/CD
- How the app works: loads the model, generates embeddings for queries/documents, computes semantic similarity
- Conclusion: a demonstration of private, cheap, serverless semantic search, highlighting the potential of in-browser AI

Data and Artificial Intelligence

Docker launches cagent, a multi-agent AI framework of sorts, using external LLMs, Docker Model Runner models, and the Docker MCP Toolkit. It offers a YAML format to describe the agents of a multi-agent system. https://github.com/docker/cagent

- "Prompt-driven" agents (no code) and a structure describing how they are deployed
- Not clear how they are invoked, other than through the cagent command line
- Built by David Gageot

OWASP describes excessive LLM agency as a vulnerability https://genai.owasp.org/llmrisk2023-24/llm08-excessive-agency/

- Excessive agency is the vulnerability that lets LLM-based systems perform damaging actions through unexpected or ambiguous outputs
- It stems from three main causes: excessive functionality, excessive permissions, or excessive autonomy of LLM agents
- Excessive functionality includes access to plugins offering more capabilities than needed, such as a read plugin that can also modify or delete
- Excessive permissions show up when a plugin accesses systems with overly elevated rights, for example read access that also includes write
- Excessive autonomy occurs when the system performs critical actions without prior human validation
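The three causes above all come down to giving the model more reach than it needs. A minimal Java sketch of the opposite approach: an allow-list of narrowly scoped, read-only tools, with high-impact actions flagged for human approval. All class and tool names here are illustrative, not taken from the OWASP page.

```java
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Illustrative guard: the agent can only call tools that are explicitly
// allow-listed, and high-impact tools require a human in the loop.
public class ToolGuard {

    // Excessive-functionality mitigation: expose only the minimal operations
    // the agent actually needs (read-only here, no update/delete).
    private static final Map<String, Function<String, String>> TOOLS = Map.of(
            "read_document", id -> "contents of " + id,
            "list_documents", ignored -> "doc-1, doc-2");

    // Excessive-autonomy mitigation: these actions need human approval first.
    private static final Set<String> REQUIRES_APPROVAL = Set.of("send_email");

    public static boolean isAllowed(String tool) {
        return TOOLS.containsKey(tool);
    }

    public static boolean requiresApproval(String tool) {
        return REQUIRES_APPROVAL.contains(tool);
    }

    public static String invoke(String tool, String arg) {
        Function<String, String> fn = TOOLS.get(tool);
        if (fn == null) {
            // An open-ended request like "execute_shell" is rejected outright.
            throw new IllegalArgumentException("Tool not allow-listed: " + tool);
        }
        return fn.apply(arg);
    }

    public static void main(String[] args) {
        System.out.println(ToolGuard.invoke("read_document", "doc-1")); // contents of doc-1
        System.out.println(ToolGuard.isAllowed("execute_shell"));       // false
    }
}
```

The point is structural: the LLM never sees a generic "run a command" capability, only a small, auditable menu of operations.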
- A typical attack scenario: a personal assistant with email access is manipulated via prompt injection into sending spam from the user's mailbox
- Prevention means strictly limiting plugins to the minimal functions needed for the intended operation
- Avoid open-ended functions like "run a shell command" in favor of more granular, specific tools
- Applying the principle of least privilege is crucial: each plugin should have only the minimal permissions it requires
- Keeping a human in the loop remains essential, to validate high-impact actions before they are executed

Launch of the MCP Registry, a sort of official meta-directory for referencing MCP servers https://www.marktechpost.com/2025/09/09/mcp-team-launches-the-preview-version-of-the-mcp-registry-a-federated-discovery-layer-for-enterprise-ai/

- MCP Registry: a federated discovery layer for enterprise AI
- Works like DNS for AI context, enabling discovery of public or private MCP servers
- Federated model: avoids the security and compliance risks of a monolithic registry; allows private sub-registries while keeping an "upstream" source of truth
- Enterprise benefits: secure internal discovery, centralized governance of external servers, reduced context sprawl, support for hybrid AI agents (private/public data)
- Open source project, currently in preview
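For illustration, here is what querying such a registry could look like from Java with the JDK's built-in HttpClient. The base URL, the /v0/servers path and the search parameter are assumptions made for this sketch, not the documented registry API; check the registry's official documentation for the real contract.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Hypothetical MCP registry client: endpoint path and query parameter
// are assumptions for this sketch, not the documented API.
public class McpRegistryClient {

    private final String baseUrl;

    public McpRegistryClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Builds the request without sending it, so it can be inspected in tests.
    public HttpRequest buildSearch(String query) {
        String encoded = URLEncoder.encode(query, StandardCharsets.UTF_8);
        URI uri = URI.create(baseUrl + "/v0/servers?search=" + encoded);
        return HttpRequest.newBuilder(uri).GET().build();
    }

    // Sends the request and returns the raw JSON listing of matching servers.
    public String fetch(String query) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response =
                client.send(buildSearch(query), HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) {
        McpRegistryClient registry = new McpRegistryClient("https://registry.example.com");
        System.out.println(registry.buildSearch("github").uri());
    }
}
```

In a federated setup, the same client could point at a private sub-registry simply by swapping the base URL.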
Official blog post: https://blog.modelcontextprotocol.io/posts/2025-09-08-mcp-registry-preview/

Exploring the internals of the SQL Server transaction log https://debezium.io/blog/2025/09/08/sqlserver-tx-log/

- An article for the hardcore crowd who want to know how SQL Server works on the inside
- Debezium currently uses SQL Server CDC change tables, polled periodically
- The article explores parsing the transaction log directly to improve performance
- The transaction log is divided into Virtual Log Files (VLFs), used in a circular fashion
- Each VLF contains blocks (512B to 60KB) that hold the transaction records
- Each record has a unique Log Sequence Number (LSN) to identify it precisely
- Data is stored in 8KB pages with a 96-byte header and an offset array
- Tables are organized into partitions and allocation units to manage disk space
- The DBCC utility lets you explore the internal structure of pages and their contents
- This understanding lays the groundwork for programmatically parsing the transaction log in a follow-up article

Tooling

The coding personalities of the different LLMs https://www.sonarsource.com/blog/the-coding-personalities-of-leading-llms-gpt-5-update/

- GPT-5 minimal does not dethrone Claude Sonnet 4 as the functional-performance leader, despite its 75% success rate
- GPT-5 generates extremely verbose code: 490,000 lines versus 370,000 for Claude Sonnet 4 on the same tasks
- The cyclomatic and cognitive complexity of GPT-5's code is dramatically higher than that of all other models
- GPT-5 introduces 3.90 issues per successful task, versus only 2.11 for Claude Sonnet 4
- GPT-5's strong point: exceptional security, with only 0.12 vulnerabilities per 1,000 lines of code
- Major weakness: a very high density of code smells (25.28 per 1,000 lines), hurting maintainability
- GPT-5 produces 12% of its issues from cognitive complexity, the highest rate of any model
- A tendency towards fundamental logic errors, with 24% of bugs being "control-flow mistakes"
- Classic vulnerabilities such as injection and path-traversal flaws reappear
- Stronger governance is needed, with mandatory static analysis to manage the complexity of generated code

Why I ditched Docker for Podman https://codesmash.dev/why-i-ditched-docker-for-podman-and-you-should-too

- The Docker problem: the persistent dockerd daemon runs with root privileges, posing security risks (many CVEs cited) and needlessly consuming resources
- The Podman answer:
  - Daemonless: no persistent background process; containers run as child processes of the Podman command, with the user's privileges
  - Hardened security: reduced attack surface; a container escape compromises an unprivileged user on the host, not the whole system; rootless mode
  - Better reliability: no single point of failure; one container crashing does not affect the others
  - Fewer resources: no always-on daemon, so less memory and CPU
- Key Podman features:
  - systemd integration: automatic generation of systemd unit files to manage containers as standard Linux services
  - Kubernetes alignment: native pod support and the ability to generate Kubernetes YAML directly (podman generate kube), easing local development for K8s
  - Unix philosophy: focuses on running containers and delegates specialized tasks to dedicated tools (e.g. Buildah to build images, Skopeo to manage them)
- Easy migration: a Docker-compatible CLI (podman uses the same commands as docker; alias docker=podman works), and existing Dockerfiles are directly usable
- Improvements included: secure defaults (privileged ports in rootless mode), better volume-permission handling, optional Docker-compatible API, and the option to convert Docker Compose files to Kubernetes YAML
- Production benefits: improved security and cleaner resource usage; Podman is a more secure evolution, better aligned with modern Linux administration and container deployment practices
- Practical guide (FastAPI example): the Dockerfile does not change; podman build and podman run directly replace the Docker commands; production deployment via systemd; multi-service applications managed with Podman "pods"; Docker Compose compatibility via podman-compose or kompose

Enhanced vulnerable API detection in JetBrains IDEs and Qodana - https://blog.jetbrains.com/idea/2025/09/enhanced-vulnerable-api-detection-in-jetbrains-ides-and-qodana/

- JetBrains partners with Mend.io to strengthen code security in their tools
- The Package Checker plugin gets new, enriched data on vulnerable APIs
- Call-graph analysis to cover more public methods of open-source libraries
- Java, Kotlin, C#, JavaScript, TypeScript and Python supported for vulnerability detection
- Enable the inspections via Settings > Editor > Inspections, searching for "Vulnerable API"
- Vulnerable methods are automatically highlighted, with flaw details on hover
- A context action navigates directly to the problematic dependency declaration
- Automatic update to an unaffected version via Alt+Enter on the dependency
- A dedicated "Vulnerable Dependencies" tool window shows the project's overall vulnerability status

Methodologies

The results of the Stack Overflow survey on AI usage in coding https://medium.com/@amareshadak/stack-overflow-just-exposed-the-ugly-truth-about-ai-coding-tools-b4f7b5992191

- 84% of developers use AI daily, but 46% do not trust the results; only 3.1% "highly trust" generated code
- 66% are frustrated by AI solutions that are "almost right"
- 45% say debugging AI code takes longer than writing it themselves
- Senior developers (10+ years) trust AI less (2.6%) than beginners (6.1%), creating a dangerous knowledge gap
- Western countries show less trust - Germany (22%), UK (23%), USA (28%) - than India (56%); the people building AI tools trust them less
- 77% of professional developers reject natural-language programming; only 12% actually use it
- When AI fails, 75% turn to humans; 35% of Stack Overflow visits now concern AI-related problems
- 69% report personal productivity gains, but only 17% see improved team collaboration
- Hidden costs: verification time, explaining AI code to teammates, refactoring, and constant cognitive load
- Human platforms still dominate for solving AI problems: Stack Overflow (84%), GitHub (67%), YouTube (61%)
- The future points to "augmented development", where AI becomes one tool among others, requiring transparency and uncertainty management
Open source mentorship and community challenges, from the Microcks folks https://microcks.io/blog/beyond-code-open-source-mentorship/

- Microcks suffers from "silent users" syndrome: people benefit from the project without contributing
- Despite thousands of downloads and growing adoption, community engagement remains low
- This lack of interaction creates sustainability challenges and limits the project's innovation
- Maintainers end up developing in a vacuum, without feedback from real users
- Contributing does not require writing code: documentation, sharing experience, or reporting bugs is enough
- Talking about a project you love to the people around you is also super useful
- Microcks also asks some specific questions in the post, so if you use it, go take a look
- Open source success depends on turning users into true community partners
- This is a pretty common pattern, I find: the vocal-to-silent ratio is tiny, and that gives a few loud voices outsized weight

Modernizing legacy systems is not just about tech https://blog.scottlogic.com/2025/08/27/holistic-approach-successful-legacy-modernisation.html

- An article that takes a step back on legacy system modernization
- Legacy modernization projects need a holistic vision, beyond a purely technological focus
- Business drivers differ from greenfield projects: cost reduction and risk mitigation rather than revenue generation
- The current state is harder to map, with many dependencies and risks of breakage
- Collaboration between architects, business analysts and UX designers is essential from the discovery phase onwards
- A three-dimensional approach is mandatory: people, processes and technology (like a 3D chess game)
- Leadership must create the space needed for discovery and planning, rather than rushing the team
- Communicate in business terms rather than technical ones, at every level of the organization
- Upfront planning is essential, contrary to received wisdom about agility
- The optimal sequencing is often non-obvious and requires deep analysis of the interdependencies
- Project phases aligned with business outcomes allow agility within each phase

Security

Cyberattack on the Muséum national d'Histoire naturelle https://www.franceinfo.fr/internet/securite-sur-internet/cyberattaques/le-museum-nati[…]e-d-une-cyberattaque-severe-une-plainte-deposee_7430356.html

Massive compromise of popular npm packages by crypto malware https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised

- 18 very popular npm packages compromised on September 8, 2025, including chalk, debug and ansi-styles, with over 2 billion combined weekly downloads; duckdb was later added to the list
- Malicious code was injected that silently intercepts crypto and web3 activity in users' browsers
- The malware manipulates wallet interactions and redirects payments to attacker-controlled accounts, with no obvious signs
- It hooks into critical functions like fetch, XMLHttpRequest and wallet APIs (window.ethereum, Solana) to intercept traffic
- Automatic detection and replacement of crypto addresses across multiple blockchains (Ethereum, Bitcoin, Solana, Tron, Litecoin, Bitcoin Cash)
- Transactions are modified in the background even when the user interface looks correct and legitimate
- Uses "lookalike" addresses via string matching, to make the swaps harder to spot
- The maintainer was compromised by a phishing email from the fake domain support@npmjs.help, registered 3 days before the attack, asking him to update his two-factor authentication after a year
- Aikido alerted the maintainer via Bluesky; he confirmed the compromise and began cleaning up the packages
- A sophisticated attack operating at several levels: web content, API calls, and transaction-signature manipulation

Video game anti-cheats: a major security flaw? - https://tferdinand.net/jeux-video-et-si-votre-anti-cheat-etait-la-plus-grosse-faille/

- Modern anti-cheats install at Ring 0 (the system kernel) with maximal privileges
- They get the same level of access as professional antivirus products, but with no audit or certification
- Some exploit Secure Boot to load before the operating system
- Supply chain risk: the APT41 group has already compromised games such as League of Legends
- An attacker with a foothold could disable security solutions and remain invisible
- Stability threat: one mistake can prevent the system from booting (see CrowdStrike)
- Possible conflicts between different anti-cheats that block one another
- Real-time monitoring of usage data, under the guise of fighting cheating
- A dangerous drift, according to the author: game companies getting EDR-level access
- Limited alternatives: cloud gaming, or sandboxing with a performance impact
- So watch out for the games your kids install!

Law, society and organization

Luc Julia before the French Senate - Monsieur Phi reacts and publishes the video "Luc Julia au Sénat : autopsie d'un grand N'IMPORTE QUOI" https://www.youtube.com/watch?v=e5kDHL-nnh4

- In a 20-minute podcast format, released at the same time, about his Devoxx talk https://www.youtube.com/watch?v=Q0gvaIZz1dM
- Le lab IA - Jérôme Fortias - "Et si Luc Julia avait raison" https://www.youtube.com/watch?v=KScI5PkCIaE
- Luc Julia before the Senate https://www.youtube.com/watch?v=UjBZaKcTeIY
- Luc Julia defends himself https://www.youtube.com/watch?v=DZmxa7jJ8sI
- "Intelligence artificielle : catastrophe imminente ?" - Luc Julia vs Maxime Fournes https://www.youtube.com/watch?v=sCNqGt7yIjo
- Tech and Co: Monsieur Phi vs Luc Julia (put a click) https://www.youtube.com/watch?v=xKeFsOceT44
- La tronche en biais https://www.youtube.com/live/zFwLAOgY0Wc

Conferences

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:

- September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
- September 15, 2025: Agile Tour Montpellier - Montpellier (France)
- September 18-19, 2025: API Platform Conference - Lille (France) & Online
- September 22-24, 2025: Kernel Recipes - Paris (France)
- September 22-27, 2025: La Mélée Numérique - Toulouse (France)
- September 23, 2025: OWASP AppSec France 2025 - Paris (France)
- September 23-24, 2025: AI Engineer Paris - Paris (France)
- September 25, 2025: Agile Game Toulouse - Toulouse (France)
- September 25-26, 2025: Paris Web 2025 - Paris (France)
- September 30-October 1, 2025: PyData Paris 2025 - Paris (France)
- October 2, 2025: Nantes Craft - Nantes (France)
- October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
- October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
- October 6-7, 2025: Swift Connection 2025 - Paris (France)
- October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
- October 7, 2025: BSides Mulhouse - Mulhouse (France)
- October 7-8, 2025: Agile en Seine - Issy-les-Moulineaux (France)
- October 8-10, 2025: SIG 2025 - Paris (France) & Online
- October 9, 2025: DevCon #25: quantum computing - Paris (France)
- October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
- October 9-10, 2025: EuroRust 2025 - Paris (France)
- October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
- October 16, 2025: Power 365 - 2025 - Lille (France)
- October 16-17, 2025: DevFest Nantes - Nantes (France)
- October 17, 2025: Sylius Con 2025 - Lyon (France)
- October 17, 2025: ScalaIO 2025 - Paris (France)
- October 17-19, 2025: OpenInfra Summit Europe - Paris (France)
- October 20, 2025: Codeurs en Seine - Rouen (France)
- October 23, 2025: Cloud Nord - Lille (France)
- October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
- October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
- October 30-November 2, 2025: PyConFR 2025 - Lyon (France)
- November 4-7, 2025: NewCrafts 2025 - Paris (France)
- November 5-6, 2025: Tech Show Paris - Paris (France)
- November 5-6, 2025: Red Hat Summit: Connect Paris 2025 - Paris (France)
- November 6, 2025: dotAI 2025 - Paris (France)
- November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
- November 7, 2025: BDX I/O - Bordeaux (France)
- November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
- November 13, 2025: DevFest Toulouse - Toulouse (France)
- November 15-16, 2025: Capitole du Libre - Toulouse (France)
- November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
- November 19-21, 2025: Agile Grenoble - Grenoble (France)
- November 20, 2025: OVHcloud Summit - Paris (France)
- November 21, 2025: DevFest Paris 2025 - Paris (France)
- November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
- November 28, 2025: DevFest Lyon - Lyon (France)
- December 1-2, 2025: Tech Rocks Summit 2025 - Paris (France)
- December 4-5, 2025: Agile Tour Rennes - Rennes (France)
- December 5, 2025: DevFest Dijon 2025 - Dijon (France)
- December 9-11, 2025: APIdays Paris - Paris (France)
- December 9-11, 2025: Green IO Paris - Paris (France)
- December 10-11, 2025: Devops REX - Paris (France)
- December 10-11, 2025: Open Source Experience - Paris (France)
- December 11, 2025: Normandie.ai 2025 - Rouen (France)
- January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- June 17, 2026: Devoxx Poland - Krakow (Poland)
- September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Masty o Rasty | پادکست فارسی مستی و راستی

Ali is VP of AI Product at SandboxAQ and was a Product Lead at Google DeepMind. Previously at Google Research; before that: Meta (including Facebook AI - FAIR, Integrity and News Feed), LinkedIn, Yahoo, Microsoft and a startup. He holds a PhD in computer science. In this episode we talk about the current climate in Silicon Valley. To reach Ali, go to: https://www.instagram.com/alikh1980 Hosted on Acast. See acast.com/privacy for more information.

All-In with Chamath, Jason, Sacks & Friedberg
Google DeepMind CEO Demis Hassabis on AI, Creativity, and a Golden Age of Science | All-In Summit


Sep 12, 2025 · 31:48


(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win
(2:39) What is Google DeepMind? How does it interact with Google and Alphabet?
(4:01) Genie 3 world model
(9:21) State of robotics models, form factors, and more
(14:42) AI science breakthroughs, measuring AGI
(20:49) Nano-Banana and the future of creative tools, democratization of creativity
(24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science

Thanks to our partners for making this happen! Solana: https://solana.com/ OKX: https://www.okx.com/ Google Cloud: https://cloud.google.com/ IREN: https://iren.com/ Oracle: https://www.oracle.com/ Circle: https://www.circle.com/ BVNK: https://www.bvnk.com/

Follow Demis: https://x.com/demishassabis
Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg
Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod
Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

The Made to Thrive Show
AI Health Breakthroughs: Personal AI Agents for Disease Prevention, Peak Performance, Lifelong Wellness & Curing Cancer by 2030 with Leading AI Futurist Steve Brown


Sep 10, 2025 · 59:04


AI is here, and today is the worst it is ever going to be. But there are also over 8 billion human beings, so the biggest question of all is: how are we going to cooperate and even co-exist? Is AI here to take all the jobs, or to create economic and scientific super-abundance? And for human health, will we all carry a personal AI health agent in our pockets helping us live healthier and happier and prevent disease, and will AI cure every single cancer by 2030? At this stage, there are more questions than answers, which is why I invited the number one artificial intelligence futurist onto the show.

Steve Brown is a leading voice in the field of artificial intelligence. A former executive at Google DeepMind and Intel, he has delivered hundreds of information-packed and entertaining keynotes across five continents, inspiring audiences to take action with AI. A thought leader on AI, generative AI, autonomous agents, digital transformation, and the future impact of AI on business, education, and society, Steve has held senior leadership roles over his 25-year career, including Senior Director and in-house Futurist at Google DeepMind in London and Intel's Chief Evangelist and Futurist. He is the co-founder of The Provenance Chain Network, a company providing supply chain transparency and security services for the U.S. Space Force, as well as a strategic advisor to two AI startups and a BCG Luminary. Steve's mission is to help organisations build a better future with AI by creating new customer experiences, streamlining operations, and elevating the workforce.
Join us as we explore:
- What is AI, what are the different types, why use one type over another, what AI can do today, and why today is the worst AI will ever be.
- How to use AI right now to improve decision making, and specifically your personal health AI agent, there with you 24/7 to help with medicine, health and performance optimization choices.
- How to think of and deploy AI as an enhancer and augmenter of our lives and professions, rather than fearing it is here to replace you.
- Prompt engineering your health and performance.
- AI hallucinations, AI risks, AI misuse, AI misalignment and AI blackmail.

Contact:
Website - https://www.stevebrown.ai

Mentions:
Tools - NotebookLM, https://notebooklm.google
Tools - Perplexity, https://www.perplexity.ai

Support the show

Follow Steve's socials: Instagram | LinkedIn | YouTube | Facebook | Twitter | TikTok

Support the show on Patreon: As much as we love doing it, there are costs involved and any contribution will allow us to keep going and keep finding the best guests in the world to share their health expertise with you. I'd be grateful and feel so blessed by your support: https://www.patreon.com/MadeToThriveShow

Send me a WhatsApp to +27 64 871 0308.

Disclaimer: Please see the link for our disclaimer policy for all of our content: https://madetothrive.co.za/terms-and-conditions-and-privacy-policy/

People of AI
Creative storytelling with AI: The making of Ancestra


Sep 10, 2025 · 61:40


In this episode of People of AI, we take you behind the scenes of "ANCESTRA," a groundbreaking film that integrates generative artificial intelligence into its core. Hear from the director Eliza McNitt and key collaborators from the Google DeepMind team about how they leveraged AI as a new creative tool, navigated its capabilities and limitations, and ultimately shaped a unique cinematic experience. Understand the future role of AI in filmmaking and its potential for developers and storytellers.

Chapters:
0:00 - Introduction to Ancestra: AI in filmmaking
3:38 - The origin story of ANCESTRA
5:35 - Google DeepMind and Primordial Soup collaboration
11:47 - Veo and the creative process
20:21 - Behind the scenes: Making the film
28:47 - Generating videos: Gemini and Veo tools
38:11 - AI as a creative tool, not a replacement
47:41 - AI's impact and the future of the film industry
53:51 - Generative models: A new kind of camera
57:46 - Rapid fire & conclusion

Resources:
Ancestra → https://goo.gle/4mVScNW
Making of ANCESTRA → https://goo.gle/3JVJil1
Veo 3 → https://goo.gle/4mWn3Kz
Veo 3 Documentation → https://goo.gle/46qqFOV
Veo 3 Cookbook → https://goo.gle/3VMVFSZ
Google Flow → https://goo.gle/3VMVR4F
Watch more People of AI → https://goo.gle/PAI
Subscribe to Google for Developers → https://goo.gle/developers

#PeopleofAI
Speakers: Christina Warren, Ashley Oldacre, Eliza McNitt, Ben Wiley, Corey Matthewson
Products mentioned: Google AI, Gemini, Veo 2, Veo 3

80,000 Hours Podcast with Rob Wiblin
#222 – Neel Nanda on the race to read AI minds

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Sep 8, 2025 181:11


We don't know how AIs think or why they do what they do. Or at least, we don't know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultural influence, directly advise on major government decisions, and even operate military equipment autonomously. We simply can't tell what models, if any, should be trusted with such authority.
Neel Nanda of Google DeepMind is one of the founding figures of the field of machine learning trying to fix this situation — mechanistic interpretability (or "mech interp"). The project has generated enormous hype, exploding from a handful of researchers five years ago to hundreds today — all working to make sense of the jumble of tens of thousands of numbers that frontier AIs use to process information and decide what to say or do.
Full transcript, video, and links to learn more: https://80k.info/nn1
Neel now has a warning for us: the most ambitious vision of mech interp he once dreamed of is probably dead. He doesn't see a path to deeply and reliably understanding what AIs are thinking. The technical and practical barriers are simply too great to get us there in time, before competitive pressures push us to deploy human-level or superhuman AIs. Indeed, Neel argues no one approach will guarantee alignment, and our only choice is the "Swiss cheese" model of accident prevention, layering multiple safeguards on top of one another.
But while mech interp won't be a silver bullet for AI safety, it has nevertheless had some major successes and will be one of the best tools in our arsenal.
For instance: by inspecting the neural activations in the middle of an AI's thoughts, we can pick up many of the concepts the model is thinking about — from the Golden Gate Bridge, to refusing to answer a question, to the option of deceiving the user. While we can't know all the thoughts a model is having all the time, picking up 90% of the concepts it is using 90% of the time should help us muddle through, so long as mech interp is paired with other techniques to fill in the gaps.
This episode was recorded on July 17 and 21, 2025.
Interested in mech interp? Apply by September 12 to be a MATS scholar with Neel as your mentor! http://tinyurl.com/neel-mats-app
What did you think? https://forms.gle/xKyUrGyYpYenp8N4A
Chapters:
Cold open (00:00)
Who's Neel Nanda? (01:02)
How would mechanistic interpretability help with AGI (01:59)
What's mech interp? (05:09)
How Neel changed his take on mech interp (09:47)
Top successes in interpretability (15:53)
Probes can cheaply detect harmful intentions in AIs (20:06)
In some ways we understand AIs better than human minds (26:49)
Mech interp won't solve all our AI alignment problems (29:21)
Why mech interp is the 'biology' of neural networks (38:07)
Interpretability can't reliably find deceptive AI – nothing can (40:28)
'Black box' interpretability — reading the chain of thought (49:39)
'Self-preservation' isn't always what it seems (53:06)
For how long can we trust the chain of thought (01:02:09)
We could accidentally destroy chain of thought's usefulness (01:11:39)
Models can tell when they're being tested and act differently (01:16:56)
Top complaints about mech interp (01:23:50)
Why everyone's excited about sparse autoencoders (SAEs) (01:37:52)
Limitations of SAEs (01:47:16)
SAEs performance on real-world tasks (01:54:49)
Best arguments in favour of mech interp (02:08:10)
Lessons from the hype around mech interp (02:12:03)
Where mech interp will shine in coming years (02:17:50)
Why focus on understanding over control (02:21:02)
If AI models are conscious, will mech interp help us figure it out (02:24:09)
Neel's new research philosophy (02:26:19)
Who should join the mech interp field (02:38:31)
Advice for getting started in mech interp (02:46:55)
Keeping up to date with mech interp results (02:54:41)
Who's hiring and where to work? (02:57:43)
Host: Rob Wiblin
Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Night Science
Can Google's Co-scientist project give scientists superpowers?

Night Science

Play Episode Listen Later Sep 8, 2025 40:13


To answer this question, we speak with Dr. Alan Karthikesalingam and Vivek Natarajan from Google DeepMind about their groundbreaking AI co-scientist project. Beyond their work at Google, Alan is an honorary lecturer in vascular surgery at Imperial College London, and Vivek teaches at Harvard's T.H. Chan School of Public Health. Together, we discuss how their system has evolved to mirror parts of human hypothesis generation while also diverging in fascinating ways. We talk about its internal "tournaments" of ideas, its ability to be prompted to "think out of the box," and whether it becomes too constrained by the need to align with every published "fact". And we discuss how we still seem far away from a time when AI can not only answer our questions, but can ask new and exciting research questions itself. The Night Science Podcast is produced by the Night Science Institute – for more information on Night Science, visit night-science.org.

Cloud Wars Live with Bob Evans
Microsoft's Mustafa Suleyman Warns Against the Illusion of Conscious AI: “Build AI for People, Not to Be a Person”

Cloud Wars Live with Bob Evans

Play Episode Listen Later Sep 5, 2025 2:30


Highlights
00:03 — Microsoft AI CEO Mustafa Suleyman has published a blog post that criticizes the notion of seemingly conscious AI. He argues that the pursuit of this idea, particularly regarding AI model welfare, is misguided.
00:19 — Suleyman explains that these concepts are already causing mental health issues among users. He is growing "more and more concerned about what is becoming known as the psychosis risk and a bunch of related issues. I don't think this will be limited to those who are already at risk of mental health issues."
00:56 — Suleyman argues that we should "build AI for people, not to be a person." He is adamant in his rejection of the opposite approach, saying: "The arrival of seemingly conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions."
AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.
01:18 — Meanwhile, companies like Anthropic, OpenAI, and Google DeepMind are actively pursuing AI consciousness or welfare research. While there are complications and not everyone is on the same page, it is clear that the evolving scenarios and possibilities presented by AI are increasingly reaching into the realm of the surreal and, in some cases, the fantastical.
01:42 — This is largely because, for the first time in human history, we have a tool that is evolving at a pace beyond what we can conceive. While some of these ideas may sound far-fetched, the warnings accompanying them are serious, based on real concerns from leaders in the AI space, the ones leading this new AI revolution.
02:04 — What everyone must do, though it will be challenging, is start imagining the unimaginable. Once you accept the vast possibilities of AI, both good and bad, you can begin to take the warnings that come with them seriously.
Visit Cloud Wars for more.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 598: Nano Banana! Real Use cases for Google's new Gemini 2.5 Flash Image

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Aug 27, 2025 47:55


Nano Banana is no longer a mystery. Google officially released Gemini 2.5 Flash Image on Tuesday (AKA Nano Banana), revealing it was the company behind the buzzy AI image model that had the internet talking. But... what does it actually do? And how can you put it to work for you? Find out in our newish weekly segment, AI at Work on Wednesdays.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Gemini 2.5 Flash Image (Nano Banana) Reveal
Benchmark Scores: Gemini 2.5 Flash Image vs. Competition
Multimodal Model Capabilities Explained
Character Consistency in AI Image Generation
Advanced Image Editing: Removal and Object Control
Integration with Google AI Studio and API
Real-World Business Use Cases for Gemini 2.5
Live Demos: Headshots, Mockups, and Infographics
Gemini 2.5 Flash Image Pricing and Limits
Iterative Prompting for AI Image Creation
Timestamps:
00:00 "AI Highlights: Google's Gemini 2.5"
06:17 "Nano Banana AI Features"
09:58 "Revolutionizing Photo Editing Tools"
12:31 "Nano Banana: Effortless Video Updating"
14:39 "Impressions on Nano Banana"
19:24 AI Growth Strategies Unlocked
20:58 Turning Selfie into Professional Headshot
24:48 AI-Enhanced Headshots and Team Photos
29:51 "3D AI Logo Mockups"
32:22 Improved Logo Design Review
35:41 Photoshop Shortcut Critique
38:50 Deconstructive Design with Logos
44:01 "Transform Diagrams Into Presentations"
46:12 "Refining AI for Jaw-Dropping Results"
Keywords: Gemini 2.5, Gemini 2.5 Flash Image, Nano Banana, Google AI, Google DeepMind, AI image generation, multimodal model, AI photo editing, image manipulation, text-to-image model, image editing AI, large language model, character consistency, AI headshot generator, real estate image editing, product mockup generator, smart image blending, style transfer AI, Google AI Studio, LM Arena, Elo score, AI watermarks, synthID fingerprint, Photoshop alternative, AI-powered design, generative AI, API integration, Adobe integration, AI for business, visual content creation, creative AI tools, professional image editing, iterative prompting, interior design AI, infographic generator, training material visuals, A/B test variations, marketing asset creation, production scaling, image benchmark, AI output watermark, cost-effective AI images, scalable AI infrastructure, prompt-based editing, natural language image editing, OpenAI GPT-4o image, benchmarking leader, vis
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner

Waking Up With AI
New Developments in World Models

Waking Up With AI

Play Episode Listen Later Aug 21, 2025 19:35


This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel explore the rapid advancements and real-world applications of AI world models, highlighting cutting-edge developments like Google DeepMind's Genie 3, Fei-Fei Li's World Labs and Microsoft Muse. Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

ACM ByteCast
Maja Matarić - Episode 73

ACM ByteCast

Play Episode Listen Later Aug 20, 2025 46:22


In this episode of ACM ByteCast, Bruke Kifle hosts 2024 ACM Athena Lecturer and ACM Eugene L. Lawler Award recipient Maja Matarić, the Chan Soon-Shiong Chaired and Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California (USC), and a Principal Scientist at Google DeepMind. Maja is a roboticist and AI researcher known for her work in human-robot interaction for socially assistive robotics, a field she pioneered. She is the founding director of the USC Robotics and Autonomous Systems Center and co-director of the USC Robotics Research Lab. Maja is a member of the American Academy of Arts and Sciences (AMACAD), Fellow of the American Association for the Advancement of Science (AAAS), IEEE, AAAI, and ACM. She received the US Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring (PAESMEM) from President Obama in 2011. She also received the Okawa Foundation, NSF Career, the MIT TR35 Innovation, the IEEE Robotics and Automation Society Early Career, and the Anita Borg Institute Women of Vision Innovation Awards, among others, and is an ACM Distinguished Lecturer. She is featured in the documentary movie Me & Isaac Newton. In the interview, Maja talks about moving to the U.S. from Belgrade, Serbia and how her early interest in both computer and behavioral sciences led her to socially assistive robotics, a field she saw as measurably helpful. She discusses the challenges of social assistance as compared to physical assistance and why progress in the field is slow. Maja explains why Generative AI is conducive to creating socially engaging robots, and touches on the issues of privacy, bias, ethics, and personalization in the context of assistive robotics. She also shares some concerns about the future, such as the dehumanization of AI interactions, and also what she's looking forward to in the field. We want to hear from you!

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Genie 3: A New Frontier for World Models with Jack Parker-Holder and Shlomi Fruchter - #743

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Aug 19, 2025 61:01


Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model's scaled-up capabilities, including creating real-time, interactive, and high-resolution environments. Jack and Shlomi share their perspectives on what defines a world model, the model's architecture, and key technical challenges and breakthroughs, including Genie 3's visual memory and ability to handle “promptable world events.” Jack, Shlomi, and Sam share their favorite Genie 3 demos, and discuss its potential as a dynamic training environment for embodied AI agents. Finally, we will explore future directions for Genie research. The complete show notes for this episode can be found at https://twimlai.com/go/743.

WeatherBrains
WeatherBrains 1022: Tighter Than A Tick's Butt

WeatherBrains

Play Episode Listen Later Aug 19, 2025 98:40


Tonight's Guest WeatherBrain is a professor of mechanical engineering and applied mathematics at the University of Pennsylvania. He previously served as a principal scientist at Microsoft Research, where he led the development of Aurora, the groundbreaking AI foundation model for earth system forecasting. Dr. Paris Perdikaris, welcome to WeatherBrains! Our email officer Jen is continuing to handle the incoming messages from our listeners. Reach us here: email@weatherbrains.com. What is Aurora and the motivation for the program? (08:15) Practical experience with tropical forecasting using Aurora (12:30) Success stories with Aurora (14:00) Convection-allowing models (CAMs) (17:15) Convective feedback problems with models (18:45) Emergence of the Google DeepMind ensemble and other AI models (20:00) Aurora's hardware architecture and its importance (23:30) Continuing need for physics-based models in the age of AI (28:00) Ethical considerations and biases in model training data (33:30) Role of US agencies and potential loss of funding (37:55) Communicating AI forecasts to the public and the ensuing ethical issues (42:30) Broad risks of using AI (44:30) Aurora business model (58:45) The Astronomy Outlook with Tony Rice (01:05:25) This Week in Tornado History With Jen (01:07:25) E-Mail Segment (01:09:30) and more! Web Sites from Episode 1022: Penn's Predictive Intelligence Lab Alabama Weather Network on Facebook Picks of the Week: James Aydelott - Dr. Cameron Nixon on YouTube: Cell Mergers and Nudgers Jen Narramore - "Significant Tornadoes 1680-1991" by Thomas Grazulis Rick Smith - USA Jobs Troy Kimmel - Foghorn Kim Klockow-McClain - Out John Gordon - Live Recon in the Atlantic Basin Bill Murray - Foghorn James Spann - Weather Lab: Cyclones The WeatherBrains crew includes your host, James Spann, plus other notable geeks like Troy Kimmel, Bill Murray, Rick Smith, James Aydelott, Jen Narramore, John Gordon, and Dr. Kim Klockow-McClain.
They bring together a wealth of weather knowledge and experience for another fascinating podcast about weather.

Mon Carnet, l'actu numérique
{INTERVIEW} - Perch: How Google DeepMind's AI Serves Biodiversity

Mon Carnet, l'actu numérique

Play Episode Listen Later Aug 18, 2025 14:43


Perch, the bioacoustics tool developed by Google DeepMind, lets researchers and ecologists build their own sound detectors to analyze nature, on land and underwater. The new version expands its database to roughly 14,000 species and remains light enough to run on a laptop, making the tool accessible around the world. Already used to protect threatened habitats, such as in Australia, Perch has also been used to assess the health of coral reefs from audio alone. The project, driven notably by the Montreal team, aims to turn these detections into genuine ecological knowledge and could, in time, reduce invasive methods of studying animals. Interview with Vincent Dumoulin, research scientist at Google DeepMind.

a16z
Google DeepMind Lead Researchers on Genie 3 & the Future of World-Building

a16z

Play Episode Listen Later Aug 16, 2025 41:28


Genie 3 can generate fully interactive, persistent worlds from just text, in real time.
In this episode, Google DeepMind's Jack Parker-Holder (Research Scientist) and Shlomi Fruchter (Research Director) join Anjney Midha, Marco Mascorro, and Justine Moore of a16z, with host Erik Torenberg, to discuss how they built it, the breakthrough “special memory” feature, and the future of AI-powered gaming, robotics, and world models.
They share:
How Genie 3 generates interactive environments in real time
Why its “special memory” feature is such a breakthrough
The evolution of generative models and emergent behaviors
Instruction following, text adherence, and model comparisons
Potential applications in gaming, robotics, simulation, and more
What's next: Genie 4, Genie 5, and the future of world models
This conversation offers a first-hand look at one of the most advanced world models ever created.
Timecodes:
0:00 Introduction & The Magic of Genie 3
0:41 Real-Time World Generation Breakthroughs
1:22 The Team's Journey: From Genie 1 to Genie 3
5:03 Interactive Applications & Use Cases
8:03 Special Memory and World Consistency
12:29 Emergent Behaviors and Model Surprises
18:37 Instruction Following and Text Adherence
19:53 Comparing Genie 3 and Other Models
21:25 The Future of World Models & Modality Convergence
27:35 Downstream Applications and Open Questions
31:42 Robotics, Simulation, and Real-World Impact
39:33 Closing Thoughts & Philosophical Reflections
Resources:
Find Shlomi on X: https://x.com/shlomifruchter
Find Jack on X: https://x.com/jparkerholder
Find Anjney on X: https://x.com/anjneymidha
Find Justine on X: https://x.com/venturetwins
Find Marco on X: https://x.com/Mascobot
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Mixture of Experts
Perplexity's bid for Chrome, Grok Imagine and GPT-5 check-in

Mixture of Experts

Play Episode Listen Later Aug 15, 2025 40:44


Would you sell Chrome for USD 34.5 billion dollars? In episode 68 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Sophie Kuijt and Shobhit Varshney for another packed week in AI. First, AI startup Perplexity puts out a bid for Google Chrome at over double their valuation. Why? Next, xAI released Grok Imagine and claims it will be the next Vine. Our experts analyze the future of AI video generation. Finally, one week after the GPT-5 release and skeptics are saying it did not live up to the hype. Is AI development plateauing? All that and more on Mixture of Experts! 00:00 – Intro 01:17 – MoE News: NVIDIA H20s, Apple AI devices, AI people pleasers and Google DeepMind's bioacoustics model 02:40 – Perplexity's USD 34.5B bid 12:10 – Grok Imagine 24:23 – GPT-5 check-in The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe to the Think Newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Learn more about artificial intelligence → https://www.ibm.com/think/artificial-intelligence Visit Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts

The Marketing AI Show
#161: GPT-5, Google DeepMind Genie 3, Cloudflare vs. Perplexity, OpenAI's Open Source Models, Claude 4.1 & New Data on AI Layoffs

The Marketing AI Show

Play Episode Listen Later Aug 12, 2025 75:25


GPT-5 finally landed, and the hype was matched with backlash. In this episode, Paul and Mike share their takeaways from the new model, provide insights into the gravity of DeepMind's photorealistic Genie 3 world-model, unravel Perplexity's stealth crawling controversy, touch on OpenAI's open-weight release and rumored $500 billion valuation, and more in our rapid-fire section.  Show Notes: Access the show notes and show links here Timestamps:  00:00:00 — Intro 00:04:57 — GPT-5 Launch and First Reactions 00:25:29 — DeepMind's Genie 3 World Model 00:32:20 — Perplexity vs. Cloudflare Crawling Dispute 00:37:37 — OpenAI Returns to Open Weights 00:41:21 — OpenAI $500B Secondary Talks 00:44:26 — Anthropic Claude Opus 4.1 and System Prompt Update 00:49:57 — AI and the Future of Work  00:56:02 — OpenAI “universal verifiers” 01:00:42 — OpenAI Offers ChatGPT to the Federal Workforce 01:02:59 — ElevenLabs Launches AI Music 01:05:32 — Meta Buys AI Audio Startup 01:09:46 — Google AI Pro for Students This episode is brought to you by our Academy 3.0 Launch Event. Join Paul Roetzer and the SmarterX team on August 19 at 12pm ET for the launch of AI Academy 3.0 by SmarterX —your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Register here. This week's episode is also brought to you by Intro to AI, our free, virtual monthly class, streaming live on Aug. 14 at 12 p.m. ET. Reserve your seat AND attend for a chance to win a 12-month AI Mastery Membership.  For more information on Intro to AI and to register for this month's class, visit www.marketingaiinstitute.com/intro-to-ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy 

The AI Breakdown: Daily Artificial Intelligence News and Discussions

First, a collection of first reactions to GPT-5. This week saw major AI shifts — from web-scraping battles to the brutal economics of AI coding startups. Cloudflare took aim at Perplexity over “stealth crawling,” Google defended AI overviews against claims they hurt web traffic, and reports revealed that coding firms like Windsurf and Replit face severe negative margins despite rapid growth. Also in the mix: Google's Jules coding agent launch, N8N's potential $2.3B valuation as the first pure-play agentic company, Microsoft's talent raids on Google DeepMind, and OpenAI's $500B secondary talks with $1.5M employee bonuses and $1 government-agency deals. Together, these stories show the fierce power struggles and harsh economics shaping AI's next era.
Brought to you by:
KPMG – Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions.
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
AGNTCY - The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at agntcy.org
Vanta - Simplify compliance - https://vanta.com/nlw
Plumb - The automation platform for AI experts and consultants https://useplumb.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown
Interested in sponsoring the show? nlw@breakdown.network

VP Land
Genie 3, ElevenLabs Music, Lightcraft Spark & More! | This Week in AI for Filmmakers

VP Land

Play Episode Listen Later Aug 9, 2025 41:50 Transcription Available


Google DeepMind's Genie 3 stole the show this week with its persistent world generation, outshining OpenAI's underwhelming GPT-5 release. Join Joey and Addy as they break down how Genie 3 creates navigable AI worlds with accurate physics and memory—allowing users to paint a wall, walk away, and return to find their changes intact. Plus: ElevenLabs' music generator, Houdini 21's VFX updates, Lightcraft's new tools for indie filmmakers, and the ongoing ethics debate around Grok Imagine's controversial NSFW capabilities.
---
The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.

AI For Humans
OpenAI's GPT-5 Is Here. It's Very Good AI But Not AGI

AI For Humans

Play Episode Listen Later Aug 8, 2025 53:25


OpenAI's GPT-5 is out & it's the best AI model! For now. Overall, the benchmarks are good-not-great but the vibes are off-the-charts. We dive into the DEEP end. Sam Altman brought the whole team out for a long demo of GPT-5. It's great at coding! It's better at writing! It has new AI personalities! It will not hallucinate nearly as much! Which is more than we can say for the OpenAI graphics department. Then, Google DeepMind's Genie 3 shows us what the next generation of AI *might* look like with its new world model. Anthropic isn't left out with a 4.1 update to its Claude Opus model. Eleven Labs drops a new AI music model, also great. And finally xAI drops Imagine AI Image & Video generation which, let's just say, can get a little *too* spicy. And, finally, Unitree's Stellar Hunter robot scares the hell out of us. NEW MODELS FOR EVERYONE! SCARY ROBOTS AT THE END FOR JUST YOU! #ai #ainews #openai Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // OpenAI's GPT-5 Announcement Live Stream https://www.youtube.com/live/0Uu_VJeVVfo?si=PzayOarIJqw1AUTA ChatGPT-5 Promo Video https://x.com/OpenAI/status/1953504357821165774 Ethan Mollick "it's very good and it just does stuff…" https://x.com/emollick/status/1953502029126549597 Benchmarks Are…Fine?
https://x.com/ArtificialAnlys/status/1953504796549558527 OpenAI Chart Crime https://x.com/EMostaque/status/1953503036501877053 Theo T3-GG's Tweet https://x.com/theo/status/1953514692439347419 New OpenAI ChatGPT Personalities https://x.com/OpenAI/status/1953534071772262511 OpenAI Open Weight Models https://openai.com/open-models/ OSS Minecraft Test https://x.com/adonis_singh/status/1952806201617510759 Google DeepMind's Genie 3: World Model https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/ Good Information Dense hands on from Google Scientist https://x.com/tejasdkulkarni/status/1952737669894574264 Good Knight Example https://x.com/philipjohnball/status/1952767070623437063 Emergent Behavior of Video T-Rex https://x.com/jkbr_ai/status/1953154961988305384 Claude Opus 4.1 https://www.anthropic.com/news/claude-opus-4-1 Eleven Labs Music https://x.com/elevenlabsio/status/1952754097976721737 Musician Using It: https://x.com/elevenlabsio/status/1953424556246384814 Grok Imagine https://techcrunch.com/2025/08/04/grok-imagine-xais-new-ai-image-and-video-generator-lets-you-make-nsfw-content/ Our Grok Imagine Examples https://x.com/AIForHumansShow/status/1952067002488779102 Mesh-Blend 3D AI for Unreal Engine https://x.com/EHuanglu/status/1952017300946911368 Kitten TTS Tiny Model Runs On Devices https://x.com/divamgupta/status/1952762876504187065 Unitree Stellar Hunter Robot (good lord) https://x.com/UnitreeRobotics/status/1952672597558309136 Kwindla's Voice Game Demo https://x.com/kwindla/status/1952947685012717659 Neural Viz Sydney Sweeney Parody https://x.com/NeuralViz/status/1952531202482856053  

The ChatGPT Report
149 - Genie 3 and OpenAI oss

The ChatGPT Report

Play Episode Listen Later Aug 7, 2025 10:42


What is Ryan yapping on about today:
OpenAI releases new "open" models: OpenAI launched two new open-weight reasoning models, GPT-OSS-120B and GPT-OSS-20B, which can run locally on devices like laptops and even phones.
Shifting business model: The new OpenAI models signal a move away from a subscription-based, pay-per-API model to a one-time hardware investment, offering more privacy since data stays on the device.
Google DeepMind's "Genie 3": Google DeepMind unveiled Genie 3, a highly advanced world simulator that can generate entire game worlds from natural language prompts, featuring high-fidelity visuals and in-world memory.
New coding benchmark: Claude upgraded its Opus model to version 4.1, which is being praised as the new top-tier model for coding and advanced agentic tasks—at least until the next major release from a competitor.
Amazon backs "Netflix of AI": Amazon is supporting an AI-generated streaming service that allows users to type prompts to create scenes or full episodes, with Disney and other major studios reportedly in talks to license their IPs.
AI for job interviews: A growing number of major companies, including Unilever, Delta, and Vodafone, are using AI to score job interviews by analyzing a candidate's facial expressions, eye movements, and voice for "confidence" and "trustworthiness."
Potential privacy concerns: With the new local-run models and AI-driven interview tools, there's a growing conversation about user privacy and data security, especially with the news that OpenAI is providing ChatGPT to the government for just $1.
Publicity vs. practical use: While the new OpenAI models are generating buzz, some observers are questioning their real-world utility, suggesting the release may be more of a publicity stunt without clear use cases or examples.

Airacast
2025 Convention Special NFB

Airacast

Play Episode Listen Later Aug 6, 2025 60:19


Episode Notes
In our second convention special we go to New Orleans with our presentation to the National Federation of the Blind. Staff from Service Delivery, Marketing and Sales, and Engineering talk about their roles and this year's updates. Then CEO Troy Otillio talks AI and Aira's involvement with Google DeepMind's Project Astra. Sign up to become a trusted tester at https://aira.io/projectastra. Learn more about visual interpreting at https://aira.io. Call our Customer Care Team at 1-800-835-1934 or email support@aira.io. Email us: airacast@aira.io Find out more at https://airacast.pinecast.co This podcast is powered by Pinecast.

Mixture of Experts
Gpt-oss, Genie 3, Personal Superintelligence and Claude pricing

Mixture of Experts

Play Episode Listen Later Aug 6, 2025 43:44


OpenAI goes open-weight? In episode 67 of Mixture of Experts, host Tim Hwang is joined by Kaoutar El Maghraoui, Chris Hay and Bruno Aziza to debrief OpenAI's release of gpt-oss, their new open-weight models. Next, the model releases continue as Google DeepMind dropped Genie 3. Our experts analyze how these new models perform. Then, there's been some drama surrounding the pricing of Claude Code. We'll discuss where Anthropic landed and what it means. Finally, Mark Zuckerberg shared a new essay on Personal Superintelligence. Our experts take us through what superintelligence looks like across the major AI players. All that and more on today's episode of Mixture of Experts.
00:00 -- Intro
01:20 -- Gpt-oss
12:26 -- Genie 3
22:12 -- Claude pricing
33:04 -- Zuckerberg on Personal Superintelligence
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Learn more about artificial intelligence → https://www.ibm.com/think/artificial-intelligence Visit Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts

Daily Tech News Show
Inside Google DeepMind's Genie 3 world model - DTNSB 5075

Daily Tech News Show

Play Episode Listen Later Aug 5, 2025 32:01


Cloudflare calls out Perplexity for evading no-crawl directives, and Windows XP and Clippy are coming for your feet! Starring Jason Howell, Tom Merritt, and Ryan Whitwam. Links to stories discussed in this episode can be found here. Hosted on Acast. See acast.com/privacy for more information.

Machine Learning Street Talk
DeepMind Genie 3 [World Exclusive] (Jack Parker-Holder, Shlomi Fruchter)

Machine Learning Street Talk

Play Episode Listen Later Aug 5, 2025 58:22


This episode features Shlomi Fruchter and Jack Parker-Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored).
Imagine you could create a video game world just by describing it. That's what Genie 3 does. It's an AI "world model" that learns how the real world works by watching massive amounts of video. Unlike a normal video game engine (like Unreal, or the one for Doom) that needs to be programmed manually, Genie generates a realistic, interactive 3D world from a simple text prompt.
*** SPONSOR MESSAGES ***
Prolific: Quality data. From real people. For faster breakthroughs.
https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
***
Here's a breakdown of what makes it so revolutionary:
From Text to a Virtual World: You can type "a drone flying by a beautiful lake" or "a ski slope," and Genie 3 creates that world for you in about three seconds. You can then navigate and interact with it in real time.
It's Consistent: The worlds it creates have a reliable memory. If you look away from an object and then look back, it will still be there, just as it was. The guests explain that this consistency isn't explicitly programmed in; it's a surprising, "emergent" capability of the powerful AI model.
A Huge Leap Forward: The previous version, Genie 2, was a major step, but it wasn't fast enough for real-time interaction and was much lower resolution. Genie 3 is 720p, interactive, and photorealistic, running smoothly for several minutes at a time.
The Killer App - Training Robots: Beyond entertainment, the team sees Genie 3 as a game-changer for training AI. Instead of training a self-driving car or a robot in the real world (which is slow and dangerous), you can create infinite simulations. You can even prompt rare events to happen, like a deer running across the road, to teach an AI how to handle unexpected situations safely.
The Future of Entertainment: This could lead to a "YouTube version 2" or a new form of VR, where users can create and explore endless, interconnected worlds together, like the experience machine from philosophy.
While the technology is still a research prototype and not yet available to the public, it represents a monumental step towards creating true artificial worlds from the ground up.
Jack Parker-Holder [Research Scientist at Google DeepMind on the Open-Endedness Team]
https://jparkerholder.github.io/
Shlomi Fruchter [Research Director, Google DeepMind]
https://shlomifruchter.github.io/
TOC:
[00:00:00] - Introduction: "The Most Mind-Blowing Technology I've Ever Seen"
[00:02:30] - The Evolution from Genie 1 to Genie 2
[00:04:30] - Enter Genie 3: Photorealistic, Interactive Worlds from Text
[00:07:00] - Promptable World Events & Training Self-Driving Cars
[00:14:21] - Guest Introductions: Shlomi Fruchter & Jack Parker-Holder
[00:15:08] - Core Concepts: What is a "World Model"?
[00:19:30] - The Challenge of Consistency in a Generated World
[00:21:15] - Context: The Neural Network Doom Simulation
[00:25:25] - How Do You Measure the Quality of a World Model?
[00:28:09] - The Vision: Using Genie to Train Advanced Robots
[00:32:21] - Open-Endedness: Human Skill and Prompting Creativity
[00:38:15] - The Future: Is This the Next YouTube or VR?
[00:42:18] - The Next Step: Multi-Agent Simulations
[00:52:51] - Limitations: Thinking, Computation, and the Sim-to-Real Gap
[00:58:07] - Conclusion & The Future of Game Engines
REFS:
World Models [David Ha, Jürgen Schmidhuber]
https://arxiv.org/abs/1803.10122
POET
https://arxiv.org/abs/1901.01753
The Fractured Entangled Representation Hypothesis [Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley]
https://arxiv.org/pdf/2505.11581
TRANSCRIPT:
https://app.rescript.info/public/share/Zk5tZXk6mb06yYOFh6nSja7Lg6_qZkgkuXQ-kl5AJqM

60 Minutes
08/03/2025: Demis Hassabis and Freezing the Biological Clock

60 Minutes

Play Episode Listen Later Aug 4, 2025 46:32


Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain. Fertility rates in the United States are currently near historic lows, largely because fewer women are having children in their 20s. As women delay starting families, many are opting for egg freezing, the process of retrieving and freezing unfertilized eggs, to preserve their fertility for the future. Does egg freezing provide women with a way to pause their biological clock? Correspondent Lesley Stahl interviews women who have decided to freeze their eggs and explores what the process entails physically, emotionally and financially. She also speaks with fertility specialists and an ethicist about success rates, equity issues and the increasing market potential of egg freezing. This is a double-length segment. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Airacast
2025 Convention Special ACB

Airacast

Play Episode Listen Later Aug 4, 2025 48:11


Episode Notes
In the first of our convention specials we feature our presentation at the American Council of the Blind's 2025 Conference and Convention in Dallas, Texas. Our Learning and Development Team take the spotlight to talk about how they craft training for visual interpreters and what's in store for the future. Then CEO Troy Otillio talks AI and Aira's involvement with Google DeepMind's Project Astra. Sign up to become a trusted tester at https://aira.io/projectastra. Learn more about visual interpreting at https://aira.io. Email us: airacast@aira.io Find out more at https://airacast.pinecast.co This podcast is powered by Pinecast.

Lex Fridman Podcast
#475 – Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games

Lex Fridman Podcast

Play Episode Listen Later Jul 23, 2025 154:56


Demis Hassabis is the CEO of Google DeepMind and Nobel Prize winner for his groundbreaking work in protein structure prediction using AI. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep475-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/demis-hassabis-2-transcript
CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact
EPISODE LINKS:
Demis's X: https://x.com/demishassabis
DeepMind's X: https://x.com/GoogleDeepMind
DeepMind's Instagram: https://instagram.com/GoogleDeepMind
DeepMind's Website: https://deepmind.google/
Gemini's Website: https://gemini.google.com/
Isomorphic Labs: https://isomorphiclabs.com/
The MANIAC (book): https://amzn.to/4lOXJ81
Life Ascending (book): https://amzn.to/3AhUP7z
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Hampton: Community for high-growth founders and CEOs. Go to https://joinhampton.com/lex
Fin: AI agent for customer service. Go to https://fin.ai/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
AG1: All-in-one daily nutrition drink. Go to https://drinkag1.com/lex
OUTLINE:
(00:00) - Introduction
(00:29) - Sponsors, Comments, and Reflections
(08:40) - Learnable patterns in nature
(12:22) - Computation and P vs NP
(21:00) - Veo 3 and understanding reality
(25:24) - Video games
(37:26) - AlphaEvolve
(43:27) - AI research
(47:51) - Simulating a biological organism
(52:34) - Origin of life
(58:49) - Path to AGI
(1:09:35) - Scaling laws
(1:12:51) - Compute
(1:15:38) - Future of energy
(1:19:34) - Human nature
(1:24:28) - Google and the race to AGI
(1:42:27) - Competition and AI talent
(1:49:01) - Future of programming
(1:55:27) - John von Neumann
(2:04:41) - p(doom)
(2:09:24) - Humanity
(2:12:30) - Consciousness and quantum computation
(2:18:40) - David Foster Wallace
(2:25:54) - Education and research
PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips

Aiming For The Moon
129. AI Needs You: Verity Harding (director of the AI & Geopolitics Project @ the Bennett Institute for Public Policy at the University of Cambridge | Founder of Formation Advisory)

Aiming For The Moon

Play Episode Listen Later Jul 23, 2025 25:04 Transcription Available


With the development of artificial intelligence on the rise, we are at a crossroads. How will we continue our innovations and regulations of this new technology? But this is more than a technological question. As my guest Verity Harding states, "AI needs you."
In this episode, I sit down with Verity Harding to discuss her book, AI Needs You: How We Can Change AI's Future and Save Our Own. How we apply AI is a multi-disciplinary issue. We need everyone, from tech people to teachers, to students, to nurses and doctors, and to everyone else.
Topics:
Why AI Needs Everyone
Technology's Shadow Self
The Socio-Technical Approach to AI
"What books have had an impact on you?"
"What advice do you have for teenagers?"
Bio:
One of TIME's 100 Most Influential People in AI, Verity Harding is director of the AI & Geopolitics Project at the Bennett Institute for Public Policy at the University of Cambridge and founder of Formation Advisory, a consultancy firm that advises on the future of technology and society. She worked for many years as Global Head of Policy for Google DeepMind and as a political adviser to Britain's deputy prime minister.
Socials -
Lessons from Interesting People substack: https://taylorbledsoe.substack.com/
Website: https://www.aimingforthemoon.com/
Instagram: https://www.instagram.com/aiming4moon/
Twitter: https://twitter.com/Aiming4Moon

The AI Breakdown: Daily Artificial Intelligence News and Discussions

A groundbreaking Harvard study trained AI on 10 million solar systems and found it perfectly predicted orbits but completely failed to understand gravity, raising questions about whether LLMs can develop true world models. While companies pour billions into scaling, Meta's Yann LeCun believes current systems will be "obsolete within 3-5 years" and argues that world models could revolutionize AI by giving machines genuine understanding of physical reality. This episode explores three competing paths to AGI, why Google DeepMind is hiring world-model teams, and whether we're looking at the missing piece for human-level AI.
Brought to you by:
KPMG - Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions.
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months.
AGNTCY - The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at agntcy.org
Vanta - Simplify compliance - https://vanta.com/nlw
Plumb - The automation platform for AI experts and consultants: https://useplumb.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown
Interested in sponsoring the show? nlw@breakdown.network

The Next Wave - Your Chief A.I. Officer
Inside Google's AI Lab: Drug Discovery, World AI Model & AlphaEvolve

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later Jul 22, 2025 18:29


Want the ultimate guide to Google's Gemini? Get it here: https://clickhubspot.com/evt
Episode 68: How is Google DeepMind pushing the boundaries of AI to tackle drug discovery, robotics, and even autonomous AI agents? Matt Wolfe (https://x.com/mreflow) sits down with DeepMind CEO Sir Demis Hassabis (https://x.com/demishassabis), a neuroscientist, AI pioneer, Nobel laureate, and knight, to peel back the curtain on Google's latest advances—and the ethical challenges that come with them. In this episode, Matt and Demis go deep on what's powering the newest generation of AI agents, how models like AlphaFold and AlphaEvolve are accelerating scientific breakthroughs, and why world models are so important for the future of robotics. Demis shares why he believes AI is poised to reshape society—for better and for worse—and what Google is doing to build public trust in its systems.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
Show Notes:
(00:00) AI Revolutionizing Drug Discovery
(03:35) Advanced Model Training Methods
(07:06) Accelerating Drug Discovery with AI
(11:12) AI's Responsible Role in Society
(13:56) AI Revolutionizing Science & Life
Mentions:
Sir Demis Hassabis: https://www.linkedin.com/in/demishassabis/
Google DeepMind: https://deepmind.google/
AlphaFold: https://alphafold.ebi.ac.uk/
AlphaEvolve: https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
Isomorphic Labs: https://www.isomorphiclabs.com/
Android XR glasses: http://blog.google/products/android/android-xr-gemini-glasses-headsets/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow
Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Group Chat
It's 4 Trillion Dollars | Group Chat News Ep. 956

Group Chat

Play Episode Listen Later Jul 21, 2025 65:27


Group Chat News is back with the hottest stories of the week, including: Scottie Scheffler crushes the competition again, an Epstein files update, Google DeepMind's AlphaGeometry just did what most humans can't, 58% of students who graduated within the last year are still looking for their first job, at least ten employees at OpenAI have turned down $300 million offers from Mark Zuckerberg, Ethereum could hit over $15k per coin in 2025, how Bessent made the case to Trump against firing Fed chair Powell, Nvidia is now worth $4 trillion, In-N-Out Burger's owner teases the chain's move into the South, plus more.

Middle Tech
319 | Lexington AI Meetup Live: The Story of AlphaFold with Founding Member Steve Crossan

Middle Tech

Play Episode Listen Later Jul 21, 2025 57:28


In this special live recording from our Lexington AI Meetup, we sit down with Steve Crossan, a founding member of Google DeepMind's AlphaFold team and former Google product leader. Steve helped launch groundbreaking AI research as part of the team that built AlphaFold, the model that cracked one of biology's grand challenges.
AlphaFold can predict a protein's 3D structure using only its amino acid sequence - a task that once took scientists months or years, now completed in minutes. With the release of AlphaFold 3, the model now maps not just proteins, but how they interact with DNA, RNA, drugs, and antibodies - a huge leap for drug discovery and synthetic biology.
Steve breaks down the origin story of AlphaFold, the future of AI-powered science, and what's next for healthcare, drug development, and beyond. A special thank you to Brent Seales and Randall Stevens for helping us coordinate Steve's talk during his visit to Lexington!
If you'd like to stay up to date about upcoming Middle Tech events, subscribe to our newsletter at middletech.beehiiv.com.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 566: Google acquihires Windsurf, OpenAI and Perplexity go for browsers and more AI News That Matters

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jul 14, 2025 34:13


Just about every big tech company made splashes in AI this week.
↳ Google acqui-hired a company that OpenAI failed to straight-up acquire
↳ Microsoft is investing $4 billion into AI training
↳ OpenAI is going after the AI browser space
↳ and Meta reportedly spent more than $200 million on one employee
Sheeeeesh. It's been a whirlwind. Don't get left behind. We'll help you be the smartest person in AI at your company.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
OpenAI's AI Browsers vs. Google Chrome
Perplexity's Comet Browser Launch Strategy
xAI's Controversial Grok 4 Alignment Issues
Teacher Union's AI Training Partnership
Microsoft Elevate AI Training Initiative
NVIDIA's $4 Trillion Market Cap Milestone
Google Acqui-hires Windsurf for DeepMind
Meta's $200M Apple AI Talent Acquisition
Timestamps:
00:00 "Everyday AI: Weekly News Recap"
05:36 Perplexity's Comet: Agentic Browser Shift
06:49 AI-Powered Browsing with Perplexity's Comet
10:44 "Grok 4: Aligns with Musk's Views"
15:37 AI Workshops for Teachers Nationwide
18:32 Microsoft's Elevate Program: AI Access & Trust
19:51 AI Strategy and Training Partners
23:13 Google Hires Windsurf Executives
29:04 AI Updates: Claude's MCP and Google Gemini
31:36 AI Developments: Browsers, Training, Market Moves
Keywords: Google acquihire, Windsurf, OpenAI browser, Perplexity, agentic browsers, Meta superintelligence, AI news, Google Chrome dominance, ChatGPT users, Chromium, computer vision, AI agents, reservation booking, AI services, market dominance, browser wars, Anthropic, Claude, Perplexity's Comet, AI-powered web, conversational interface, AI assistant, natural language, OpenAI's GPT, Anthropic Claude, Grok 4, xAI, Elon Musk AI, AI benchmarks, content moderation, Grok controversy, AI alignment, TechCrunch, contentious questions, AI partnerships, educational AI, American Federation of Teachers, Microsoft Copilot, Microsoft funding, ethical AI, Elevate Academy, NVIDIA market cap, AI chips, Google DeepMind, AI coding, Apple AI executive, AI metrics, Anthropic, Microsoft layoffs.
Square keeps up so you don't have to slow down. Get everything you need to run and grow your business—without any long-term commitments. And why wait? Right now, you can get up to $200 off Square hardware at square.com/go/jordan. Run your business smarter with Square. Get started today.

This Week in Startups
The Grammarly–Superhuman Megadeal, plus TWiST 500 chats with LabelBox and Apptronik's founders | E2146

This Week in Startups

Play Episode Listen Later Jul 1, 2025 87:24


Today's show:
Grammarly is acquiring the beloved email app Superhuman! In today's extremely timely episode, @alex sits down with Grammarly CEO Shishir Mehrotra and Superhuman founder Rahul Vohra to unpack why they're merging, how they plan to combine apps and AI agents, and what it means for the future of email and work. PLUS they reveal how Grammarly's 40M+ daily users already rely on email—and why this deal is the key to building the ultimate communication assistant. Don't miss this deep dive into one of the most exciting AI acquisitions yet!
#AI #Startups #Productivity #Grammarly #Superhuman #VentureCapital #MergersAndAcquisitions
Timestamps:
(0:00) Introduction to the future of robotics and AI regulation
(1:11) Introduction of hosts and overview of today's topics
(4:19) Cloudflare's new pay-per-crawl product
(09:37) Northwest Registered Agent. Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit northwestregisteredagent.com/twist today!
(10:41) Polymarket segment on Apple's acquisition prospects
(14:29) Grammarly and Superhuman CEOs interviews
(19:42) Retool - Visit https://www.retool.com/twist and try it out today.
(24:09) AI in productivity tools and email agents discussion
(29:23) AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit aws.amazon.com/startups/credits
(30:46) Grammarly's AI platform and the acquisition of Superhuman
(41:16) Introduction of Manu Sharma from Labelbox
(41:47) The role of data labeling in AI and Labelbox's data factory
(53:44) Labelbox's market position and capital efficiency post-ChatGPT
(1:03:00) Advances in humanoid robotics with Jeff Cardenas from Apptronik
(1:07:09) Market readiness and capabilities of Apollo 2 robots
(1:13:27) Humanoid robotics: Google DeepMind partnership and international competition
(1:20:54) US federal support for R&D in robotics
(1:22:10) The evolution of humanoid robotics and Apptronik's growth in Austin
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
(09:37) Northwest Registered Agent. Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit northwestregisteredagent.com/twist today!
(19:42) Retool - Visit https://www.retool.com/twist and try it out today.
(29:23) AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit aws.amazon.com/startups/credits
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916