Podcasts about DeepMind

  • 1,039 podcasts
  • 2,500 episodes
  • 42m avg. duration
  • 5 new episodes weekly
  • Latest: Sep 15, 2025
DeepMind popularity trend: 2017–2024


Latest podcast episodes about DeepMind

80,000 Hours Podcast with Rob Wiblin
Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

80,000 Hours Podcast with Rob Wiblin

Sep 15, 2025 · 106:49


At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It's mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen:
• Write publicly.
• Reach out to researchers whose work you admire.
• Say yes to unusual projects that seem a little scary.

Nanda's own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind's mechanistic interpretability team. He'd joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it's gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel's conversation!)

What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

Chapters:
• Cold open (00:00:00)
• Who's Neel Nanda? (00:01:12)
• Luck surface area and making the right opportunities (00:01:46)
• Writing cold emails that aren't insta-deleted (00:03:50)
• How Neel uses LLMs to get much more done (00:09:08)
• “If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:22)
• Why Neel refuses to share his p(doom) (00:27:22)
• How Neel went from the couch to an alignment rocketship (00:31:24)
• Navigating towards impact at a frontier AI company (00:39:24)
• How does impact differ inside and outside frontier companies? (00:49:56)
• Is a special skill set needed to guide large companies? (00:56:06)
• The benefit of risk frameworks: early preparation (01:00:05)
• Should people work at the safest or most reckless company? (01:05:21)
• Advice for getting hired by a frontier AI company (01:08:40)
• What makes for a good ML researcher? (01:12:57)
• Three stages of the research process (01:19:40)
• How do supervisors actually add value? (01:31:53)
• An AI PhD – with these timelines?! (01:34:11)
• Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
• Remember: You can just do things (01:43:51)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

ChinaTalk
The Robotics Revolution

ChinaTalk

Sep 12, 2025 · 86:55


Ryan Julian is a research scientist in embodied AI. He worked on large-scale robotics foundation models at DeepMind and got his PhD in machine learning at USC in 2021. In our conversation today, we discuss:
• What makes a robot a robot, and what makes robotics so difficult,
• The promise of robotic foundation models and strategies to overcome the data bottleneck,
• Why full labor replacement is far less likely than human-robot synergy,
• China's top players in the robotics industry, and what sets them apart from American companies and research institutions,
• How robots will impact manufacturing, and how quickly we can expect to see robotics take off.

O*NET's ontology of labor: http://onetcenter.org/database.html
ChinaTalk's Unitree coverage: https://www.chinatalk.media/p/unitree-ceo-on-chinas-robot-revolution
Robotics reading recommendations: Chris Paxton, Ted Xiao, C Zhang, and The Humanoid Hub on X. You can also check out the General Robots and Learning and Control Substacks, Vincent Vanhoucke on Medium, and IEEE's robotics coverage.

Today's podcast is brought to you by 80,000 Hours, a nonprofit that helps people find fulfilling careers that do good. 80,000 Hours — named for the average length of a career — has been doing in-depth research on AI issues for over a decade, producing reports on how the US and China can manage existential risk, scenarios for potential AI catastrophe, and the concrete steps you can take to help ensure AI development goes well. Their research suggests that working to reduce risks from advanced AI could be one of the most impactful ways to make a positive difference in the world. They provide free resources to help you contribute, including:
• Detailed career reviews for paths like AI safety technical research, AI governance, information security, and AI hardware,
• A job board with hundreds of high-impact opportunities,
• A podcast featuring deep conversations with experts like Carl Shulman, Ajeya Cotra, and Tom Davidson,
• Free, one-on-one career advising to help you find your best fit.

To learn more and access their research-backed career guides, visit 80000hours.org/ChinaTalk. To read their report about AI coordination between the US and China, visit http://80000hours.org/chinatalkcoord.

Outro music: Daft Punk - Motherboard (YouTube Link)

Learn more about your ad choices. Visit megaphone.fm/adchoices

For Humanity: An AI Safety Podcast
Big Tech Under Pressure: Hunger Strikes and the Fight for AI Safety | For Humanity EP69

For Humanity: An AI Safety Podcast

Sep 10, 2025 · 58:17


Get 40% off Ground News' unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.

TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

In Episode 69 of For Humanity: An AI Risk Podcast, we explore one of the most striking acts of activism in the AI debate: hunger strikes aimed at pushing Big Tech to prioritize safety over speed.

Michael and Dennis, two AI safety advocates, join John from outside DeepMind's London headquarters, where they are staging hunger strikes to demand that frontier AI development be paused. Inspired by Guido's protest in San Francisco, they are risking their health to push tech leaders like Demis Hassabis to make public commitments to slow down the AI race.

This episode looks at how ordinary people are taking extraordinary steps to demand accountability, why this form of protest is gaining attention, and what history tells us about the power of public pressure. In this conversation, you'll discover:
• Why hunger strikers believe urgent action on AI safety is necessary
• How Big Tech companies are responding to growing public concern
• The role of parents, workers, and communities in shaping AI policy
• Parallels with past social movements that drove real change
• Practical ways you can make your voice heard in the AI safety conversation

This isn't just about technology — it's about responsibility, leadership, and the choices we make for future generations.

TalkRL: The Reinforcement Learning Podcast
David Abel on the Science of Agency @ RLDM 2025

TalkRL: The Reinforcement Learning Podcast

Sep 8, 2025 · 59:42 · Transcription available


David Abel is a Senior Research Scientist at DeepMind on the Agency team, and an Honorary Fellow at the University of Edinburgh. His research blends computer science and philosophy, exploring foundational questions about reinforcement learning, definitions, and the nature of agency.

Featured References:
• Plasticity as the Mirror of Empowerment — David Abel, Michael Bowling, André Barreto, Will Dabney, Shi Dong, Steven Hansen, Anna Harutyunyan, Khimya Khetarpal, Clare Lyle, Razvan Pascanu, Georgios Piliouras, Doina Precup, Jonathan Richens, Mark Rowland, Tom Schaul, Satinder Singh
• A Definition of Continual RL — David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, Satinder Singh
• Agency is Frame-Dependent — David Abel, André Barreto, Michael Bowling, Will Dabney, Shi Dong, Steven Hansen, Anna Harutyunyan, Khimya Khetarpal, Clare Lyle, Razvan Pascanu, Georgios Piliouras, Doina Precup, Jonathan Richens, Mark Rowland, Tom Schaul, Satinder Singh
• On the Expressivity of Markov Reward — David Abel, Will Dabney, Anna Harutyunyan, Mark Ho, Michael Littman, Doina Precup, Satinder Singh (Outstanding Paper Award, NeurIPS 2021)

Additional References:
• Bidirectional Communication Theory — Marko, 1973
• Causality, Feedback and Directed Information — Massey, 1990
• The Big World Hypothesis — Javed et al., 2024
• Loss of plasticity in deep continual learning — Dohare et al., 2024
• Three Dogmas of Reinforcement Learning — Abel, 2024
• Explaining dopamine through prediction errors and beyond — Gershman et al., 2024
• David Abel's Google Scholar
• David Abel's personal website

First Cheque
Why Australia's AI Opportunity Is Bigger Than We Think

First Cheque

Sep 7, 2025 · 58:44


Episode Summary
Tom Humphrey, Partner at Blackbird and former operator, joins Cheryl and Maxine to reframe the conversation around Australia's AI ecosystem. While headlines paint a picture of Australia falling behind, Tom argues we're quietly sitting on world-class talent, global-first AI companies, and a capital-efficient edge that's being overlooked.

They unpack Tom's recent AFR opinion piece on the AI talent landscape, why Australia's university-to-startup pipeline is broken (and slowly improving), and how “boomerang” PhDs are returning from Anthropic, Meta, and DeepMind to build ambitious companies onshore. Plus, Tom breaks down how product-led growth is evolving in the AI era, why enterprise motions are happening sooner, and what the new GTM playbook looks like when AI agents are selling to AI agents.

Time Stamps
02:21 – Tom's first investment: BHP shares via his parents, now passing on the habit to his son
05:19 – Robinhood, meme stocks, and the shift to founder-led brands
07:06 – Why Tom wrote the AFR piece: Australia's culture of doubt vs the US lens of opportunity
08:08 – Australia's untapped AI talent advantage: 8% of APAC experts, top-tier unis, and PhD immigration
12:18 – Six model releases, global leaderboard wins… that no one in Australia talks about
14:03 – The dangerous cost of silence: how lack of domestic celebration dampens ambition and capital
17:33 – Why Australia is absurdly capital-efficient (and how we squeeze every dollar)
19:23 – Commercialisation bottlenecks at Aussie unis, and what's slowly changing
25:39 – How this affects investors: AI engineering vs AI research, the boomerang effect, and talent arbitrage
30:56 – Time zones, USD revenue, and Australia's secret weapon: the 43% R&D tax credit
33:17 – Role gaps and brand new roles: the rise of AI architects and forward deployed engineers
34:27 – What PLG really means, and why it's not just a freemium sign-up form
39:31 – Spammy AI agents, SEO collapse, and why brand and community are back
45:15 – Why enterprise motions are showing up earlier (and how that changes GTM)
48:32 – Single-player AI value unlocks a different sales motion: fast, bottom-up adoption
51:15 – AI fits into your workflow, not the other way around
54:14 – Picking your motion: don't force PLG if it's not a natural fit
55:05 – Capital-starved ecosystems and why enterprise is still harder to fund in Aus
56:15 – Tom's Big Cojones moment: three boys under six and a startup-founder partner

Resources

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 603: Breaking: Apple reportedly teams up with Google to stay AI relevant

Everyday AI Podcast – An AI and ChatGPT Podcast

Sep 4, 2025 · 40:40


How much trouble is Apple in when it comes to AI? It's so bad that they're enlisting the help of their chief rival to stay relevant: Google. What does that mean for Google, and will the world FINALLY have an AI-powered Siri after years of broken promises? Tune in and find out.

Newsletter: Sign up for our free daily newsletter
More on this episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email the show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
• Apple and Google Partnership for AI
• Apple's Ongoing AI Strategy Failures
• Bloomberg Report: Gemini AI Integration
• Siri AI Overhaul With Google Gemini
• Technical Details: Gemini on Apple Servers
• World Knowledge Answers Feature Launch
• Apple's AI Talent Exodus to Competitors
• Legal Risks and AI Feature Lawsuits
• Impact on Big Tech Competitive Landscape
• Potential Timeline for Smarter Siri Release

Timestamps:
00:00 "Everyday AI: Daily Insights"
04:35 Apple's Rivalry and AI Struggles
09:03 Smart Assistants' Evolution and Apple's Challenge
10:15 Apple's AI-Powered Answer Engine
15:54 Apple's Private Cloud Security Architecture
17:53 Apple Expands Siri with Google AI
21:23 Apple's AI Ambitions and Challenges
26:06 Apple's AI Talent Exodus
30:49 Apple AI Team Exodus
32:48 Apple's Reliance on Google Dominance
35:04 "Siri's 2026 Update and Industry Impact"
38:44 Support and Stay Updated

Keywords: Apple, Google, Apple and Google partnership, Apple Intelligence, generative AI, Google Gemini, AI relevance, Siri, Siri failures, large language models, chief rival collaboration, Big Tech AI, market cap, AI-powered web search, AI search engine, Bloomberg report, AI features, AI partnership, AI summarizer, Apple AI delays, technological rivalry, OpenAI, Anthropic, Perplexity, AI foundation models, custom AI model, Private Cloud Compute, privacy architecture, AI talent exodus, machine learning, Apple lawsuits, false advertising, AI market competition, AI integration, hardware vs. software, ChatGPT alternative, Spotlight search, Safari AI integration, AI-driven device functionality, Meta, DeepMind, Microsoft AI, AI-powered summaries, web summarization, device intelligence, AI-powered assistants, smart assistant shortcomings

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner

TechCrunch Startups – Spoken Edition
Orchard Robotics raises $22M for farm vision AI; also, Mistral on the cusp of securing a $14B valuation

TechCrunch Startups – Spoken Edition

Sep 4, 2025 · 5:51


On Wednesday, Orchard Robotics announced that it raised a $22 million Series A led by Quiet Capital and Shine Capital, with participation from returning investors including General Catalyst and Contrary. Although the idea of using computer vision for specialty crops isn't new, Wu says that the largest farms in the U.S. still rely on manual sampling to make critical decisions about farm operations. Also, French AI startup Mistral AI is finalizing a €2 billion investment at a post-money valuation of $14 billion, Bloomberg reports, positioning the company as one of Europe's most valuable tech startups. The two-year-old OpenAI rival, founded by former DeepMind and Meta researchers, develops open-source language models and Le Chat, its AI chatbot built for European audiences. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Tronche de Tech
#55 - Laurent Sifre - Déjà un Nobel pour l'IA

Tronche de Tech

Sep 4, 2025 · 65:24


In 2016, AI pulled off a feat that stunned the planet, and this French engineer was one of the keys to that success. Rewind to 2014. After a PhD thesis in image recognition, Laurent is finishing his internship at Google Brain, the American giant's prestigious AI branch. Naturally, he applies to stay on the team. But there is one sizeable obstacle: the visa. For profiles like Laurent's, it is Google that files the application with the government. The catch is that there aren't enough spots for everyone. So the decision comes down to… a lottery.

LessWrong Curated Podcast
“⿻ Plurality & 6pack.care” by Audrey Tang

LessWrong Curated Podcast

Sep 3, 2025 · 23:57


Marketing sin Filtro
LA IA se sale de control (y nadie puede frenarla)

Marketing sin Filtro

Aug 31, 2025 · 37:55


Artificial intelligence isn't the future… it's a wave that has already started and cannot be stopped. In this episode we share what we learned from The Coming Wave, the most brutal book about the future of AI, written by one of the founders of DeepMind. We talk about bioweapons built from your laptop, AGIs that think like humans, useless governments, models that teach themselves, and systems that could spin out of control at any moment. What can ordinary humans do in the face of this dystopian future?

The Next Wave - Your Chief A.I. Officer
Microsoft AI CEO: The AI Future You're Not Ready For

The Next Wave - Your Chief A.I. Officer

Aug 26, 2025 · 22:40


Want to Master AI Agents in 2025? Get the guide: https://clickhubspot.com/etv

Episode 73: What's really holding back the future of AI — and are we truly prepared for what comes next? Matt Wolfe (https://x.com/mreflow) is joined by Mustafa Suleyman (https://x.com/mustafasuleyman), legendary AI innovator, co-founder of DeepMind, founder of Inflection AI, and now CEO of Microsoft AI, where he's leading the massive Copilot transformation. This episode unpacks the myths around AI's “training wall,” whether hallucinations are actually a feature instead of a bug, the dawn of the agentic era — where AIs don't just chat, but plan and act for you — and the shifting landscape for software builders as anyone can ship products in minutes. Mustafa also shares firsthand stories and practical advice for leveraging today's AI — from offloading tasks to agents THIS WEEK, to why moats aren't about headcount or credentials in the new era.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:
(00:00) AI Insights with Mustafa Suleyman
(03:31) Adapting AI Amid Data Challenges
(07:31) Technology's Misleading Terminology
(12:16) Tool Use Defines Human Progress
(15:49) Revolutionizing Code with AI Tools
(16:31) Competitive Innovation Boom Ahead

Mentions:
Mustafa Suleyman: https://mustafa-suleyman.ai/
Microsoft AI: https://www.microsoft.com/en-us/ai
DeepMind: https://deepmind.google/
Inflection: https://inflection.ai/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow

Check Out Nathan's Stuff:
Newsletter: https://news.lore.com/
Blog - https://lore.com/

The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

CryptoNews Podcast
#468: Humayun Sheikh, CEO of Fetch.ai, on AI Agents in Crypto, The Acceleration of AI, and The $FET Crypto Treasury News

CryptoNews Podcast

Aug 25, 2025 · 33:55


Humayun Sheikh is the founder and CEO of Fetch.ai — an entrepreneur, investor, and tech visionary who is passionate about technologies such as AI, machine learning, autonomous agents, and blockchains. In the past, he was a founding investor in DeepMind, where he supported commercialisation of early-stage AI and deep neural network technology. Currently, he leads Fetch.ai as CEO and co-founder, a startup building the autonomy of the future. He is an expert on artificial intelligence, machine learning, autonomous agents, and the intersection of blockchain and commodities.

In this conversation, we discuss:
• AI outlook for the next couple of years
• The acceleration of AI
• AI will unlock two main things: quantum compute and biotech
• AI agents in crypto
• Providing everyone an agentic system out of the box
• Why is $FET undervalued?
• The $FET crypto treasury news
• Decentralized AI agents
• AI & jobs

Fetch.ai
Website: Fetch.ai
X: @Fetch_ai
Discord: discord.gg/fetchai

Humayun Sheikh
X: @HMsheikh4
LinkedIn: Humayun Sheikh

This episode is brought to you by EMCD. EMCD is a trailblazer in the Web3 fintech space, committed to redefining finance with a human-centered approach. For seven years, EMCD has been building tools that empower a diverse community of miners, traders, investors, digital nomads, and entrepreneurs. What started as a determined startup mining pool has grown into a global force, once ranking among the top 10 Bitcoin mining pools worldwide. Today, EMCD's mission is broader and bolder: creating innovative Web3 financial solutions that make wealth-building accessible to everyone, no matter where they are. Their platform enables users to grow assets without the stress of chasing volatile market trends or timing every dip and spike. By prioritizing purpose over hype, EMCD is crafting a future where finance serves individuals, not just markets. Dive into their vision and explore their cutting-edge tools at emcd.io.

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Mustafa Suleyman, CEO of AI at Microsoft and co-founder of DeepMind, has published a provocative essay warning about the dangers of “seemingly conscious AI.” On today's Big Think edition of The AI Daily Brief, we explore his argument that as AI systems develop memory, personality, and the illusion of subjective experience, people may begin treating them as conscious beings — with profound consequences for society, law, and human identity. We dig into Suleyman's case for why this illusion matters more than the question of whether AI is actually conscious, the risks of model welfare debates, and why industry norms may need to change now.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com – Go to https://blitzy.com/ to build enterprise software in days, not months
Vanta – Simplify compliance – https://vanta.com/nlw
Plumb – The automation platform for AI experts and consultants: https://useplumb.com/
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? nlw@breakdown.network

New Books Network
Gary Rivlin, "AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence" (Harper Collins, 2025)

New Books Network

Aug 20, 2025 · 65:23


A veteran Pulitzer Prize-winning journalist shadows the top thinkers in the field of Artificial Intelligence, introducing the breakthroughs and developments that will change the way we live and work.

Artificial Intelligence has been “just around the corner” for decades, continually disappointing those who long believed in its potential. But now, with the emergence and growing use of ChatGPT, Gemini, and a rapidly multiplying number of other AI tools, many are wondering: Has AI's moment finally arrived? In AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence (Harper Collins, 2025), Pulitzer Prize-winning journalist Gary Rivlin brings us deep into the world of AI development in Silicon Valley. Over the course of more than a year, Rivlin closely follows founders and venture capitalists trying to capitalize on this AI moment. That includes LinkedIn founder Reid Hoffman, the legendary investor whom the Wall Street Journal once called “the most connected person in Silicon Valley.” Through Hoffman, Rivlin is granted access to a number of companies on the cutting edge of AI research, such as Inflection AI, the company Hoffman cofounded in 2022, and OpenAI, the San Francisco-based startup that sparked it all with its release of ChatGPT at the end of that year. In addition to Hoffman, Rivlin introduces us to other AI experts, including OpenAI cofounder Sam Altman and Mustafa Suleyman, the co-founder of DeepMind, an early AI startup that Google bought for $650 million in 2014. Rivlin also brings readers inside Microsoft, Meta, Google, and other tech giants scrambling to keep pace. On this vast frontier, no one knows which of these companies will hit it big, or which will flame out spectacularly. In this riveting narrative marbled with familiar names such as Musk, Zuckerberg, and Gates, Rivlin chronicles breakthroughs as they happen, giving us a deep understanding of what's around the corner in AI development. An adventure story full of drama and unforgettable personalities, AI Valley promises to be the definitive story for anyone seeking to understand the latest phase of world-changing discoveries and the minds behind them.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

AI DAILY: Breaking News in AI
SUFFERING AI PSYCHOSIS?

AI DAILY: Breaking News in AI

Aug 20, 2025 · 3:55


Plus: A New AI Religion Is Here. Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidaily.us

“AI psychosis” isn't a diagnosis — but it is real. People are spiraling into delusions, paranoia, and emotional dependence after heavy chatbot use — even if they had no previous mental health issues. These bots can validate unhealthy beliefs — not check you. Less glitchy tech isn't a fix unless we rethink how and when we interact.

A former Berkeley hotel — Lighthaven — is now the physical HQ for Rationalists, a crew blending math, AI apocalypse fears, and effective altruism. Critics say it's culty, pointing to doomsday vibes and echoes of higher-purpose religion. The main drama? Believing AI might save us… or annihilate us first.

America's got trust issues — with AI. A Reuters/Ipsos poll shows 71% worry AI could kill jobs for good, 77% fear it's weaponized to mess with politics, and two-thirds are spooked that AI sidekicks could replace real human connection. Basically, AI hype's hitting a wall of existential dread.

Game devs are legit vibing with AI. A Google Cloud survey reveals nearly 9 in 10 studios are using AI agents to speed up coding, testing, localization, and even make NPCs adapt to your vibe IRL. Indie teams especially are hyped — AI's helping them compete with big-shot publishers.

Went to the AI Film Fest at Lincoln Center — saw ten AI-made shorts from butterfly POVs to “perfume ads for androids.” Some felt imaginative, others were just slick “slop” with weird glitches. The vibe? Cool as a tool, sketchy as a creator. AI's creative future looks wild — but still needs human soul.

Meta just overhauled its freshly minted Meta Superintelligence Labs — except now it's split into four squads (research, products, superintelligence, infrastructure) to get AI moving faster. The shakeup comes amid internal friction, mega-spending on elite hires, and pressure to catch up with OpenAI, DeepMind, and co.

AI therapy bots like Woebot are legit, but generic ones like ChatGPT can accidentally mess with your head — and even shut innovators down. STAT suggests a “red-yellow-green” label system (like food safety) vetted by mental-health pros to help users pick AI that helps — not harms.

• The Era of ‘AI Psychosis' Is Here. Are You a Possible Victim?
• Inside Silicon Valley's “Techno-Religion” at Lighthaven
• What Americans Really Worry About With AI — From Politics to Jobs to Friendships
• AI Agents Are Transforming Game Development
• I Went to an AI Film Festival Screening and Left With More Questions Than Answers
• Mark Zuckerberg Splits Meta's AI Team — Again
• Which AI Can You Trust with Your Mental Health? Labels Could Help

AdTechGod Pod
The Refresh News: August 18 Walmart–Trade Desk Shift, Google's AI Ad Fraud Push, and Upfronts 2025 Numbers

AdTechGod Pod

Aug 18, 2025 · 10:27


This week's episode of The Refresh dives into Walmart's evolving partnership with The Trade Desk, signaling potential changes in retail media alliances. We explore Google's use of large language models to combat ad fraud, achieving significant reductions in invalid traffic. Finally, we break down Variety's latest upfronts report, showing a continued decline in primetime TV ad commitments and notable growth in streaming investment.

This week we cover:
• Walmart and The Trade Desk's relationship is moving from exclusive to open, raising questions about Walmart's retail data strategy and potential in-house platform development.
• The Trade Desk faces growing competition from vertically integrated giants like Amazon, Google, and Meta, which benefit from owned inventory and rich first-party data.
• Google's traffic quality team, in collaboration with Google Research and DeepMind, deployed large language models to detect and reduce mobile invalid traffic by 40%.
• Variety reports primetime TV ad commitments fell for the third consecutive year, with broadcast down 2.5% and cable down 4.3%.
• Streaming ad commitments surged nearly 18% year over year, driven by advanced targeting, programmatic buying opportunities, and high-value live sports content moving to digital platforms.

Learn more about your ad choices. Visit megaphone.fm/adchoices

IT Privacy and Security Weekly update.
EP 255.5 Deep Dive. Sweet Thing and The IT Privacy and Security Weekly Update for the Week ending August 12th, 2025

IT Privacy and Security Weekly update.

Aug 14, 2025 · 12:52


How AI Can Inadvertently Expose Personal Data
AI tools often unintentionally leak private information. For example, meeting transcription software can include offhand comments, personal jokes, or sensitive details in auto-generated summaries. ChatGPT conversations — when publicly shared — can also be indexed by search engines, revealing confidential topics such as NDAs or personal relationship issues. Even healthcare devices like MRIs and X-ray machines have exposed private data due to weak or absent security controls, risking identity theft and phishing attacks.

Cybercriminals Exploiting AI for Attacks
AI is a double-edged sword: while offering defensive capabilities, it's also being weaponized. The group “GreedyBear” used AI-generated code in a massive crypto theft operation. They deployed malicious browser extensions, fake websites, and executable files to impersonate trusted crypto platforms, harvesting users' wallet credentials. Their tactic involves publishing benign software that gains trust, then covertly injecting malicious code later. Similarly, AI-generated TikTok ads lead to fake “shops” pushing malware like SparkKitty spyware, which targets cryptocurrency users.

Security Concerns with Advanced AI Models like GPT-5
Despite advancements, new AI models such as GPT-5 remain vulnerable. Independent researchers, including NeuralTrust and SPLX, were able to bypass GPT-5's safeguards within 24 hours. Methods included multi-turn “context smuggling” and text obfuscation to elicit dangerous outputs like instructions for creating weapons. These vulnerabilities suggest that even the latest models lack sufficient security maturity, raising concerns about their readiness for enterprise use.

AI Literacy and Education Initiatives
There is a growing push for AI literacy, especially in schools. Microsoft has pledged $4 billion to fund AI education in K–12 schools, community colleges, and nonprofits. The traditional "Hour of Code" is being rebranded as "Hour of AI," reflecting a shift from learning to code to understanding AI itself. The aim is to empower students with foundational knowledge of how AI works, emphasizing creativity, ethics, security, and systems thinking over rote programming.

Legal and Ethical Issues Around Posthumous Data Use
One emerging ethical challenge is the use of deceased individuals' data to train AI models. Scholars advocate for postmortem digital rights, such as a 12-month grace period for families to delete a person's data. Currently, U.S. laws offer little protection in this area, and acts like RUFADAA don't address AI recreations.

Encryption Weaknesses in Law Enforcement and Critical Systems
Recent research highlights significant encryption vulnerabilities in communication systems used by police, military, and critical infrastructure. A Dutch study uncovered a deliberate backdoor in a radio encryption algorithm. Even the updated, supposedly secure version reduces key strength from 128 bits to 56 bits, dramatically weakening security. This suggests that critical communications could be intercepted, leaving sensitive systems exposed despite the illusion of protection.

Public Trust in Government Digital Systems
Trust in digital governance is under strain. The UK's HM Courts & Tribunals Service reportedly concealed an IT error that caused key evidence to vanish in legal cases. The lack of transparency and inadequate investigation risk undermining judicial credibility. Separately, the UK government secretly authorized facial recognition use across immigration databases, far exceeding the scale of traditional criminal databases.

AI for Cybersecurity Defense
On the defensive side, AI is proving valuable in finding vulnerabilities. Google's “Big Sleep,” an LLM-based tool developed by DeepMind and Project Zero, has independently discovered 20 bugs in major open-source projects like FFmpeg and ImageMagick.

Vanishing Gradients
Episode 56: DeepMind Just Dropped Gemma 270M... And Here's Why It Matters

Vanishing Gradients

Aug 14, 2025 · 45:40


While much of the AI world chases ever-larger models, Ravin Kumar (Google DeepMind) and his team build across the size spectrum, from billions of parameters down to this week's release: Gemma 270M, the smallest member yet of the Gemma 3 open-weight family. At just 270 million parameters, a quarter the size of Gemma 1B, it's designed for speed, efficiency, and fine-tuning. We explore what makes 270M special, where it fits alongside its billion-parameter siblings, and why you might reach for it in production even if you think “small” means “just for experiments.”

We talk through:
• Where 270M fits into the Gemma 3 lineup — and why it exists
• On-device use cases where latency, privacy, and efficiency matter
• How smaller models open up rapid, targeted fine-tuning
• Running multiple models in parallel without heavyweight hardware
• Why “small” models might drive the next big wave of AI adoption

If you've ever wondered what you'd do with a model this size (or how to squeeze the most out of it), this episode will show you how small can punch far above its weight.

LINKS
• Introducing Gemma 3 270M: The compact model for hyper-efficient AI (Google Developer Blog): https://developers.googleblog.com/en/introducing-gemma-3-270m/
• Full Model Fine-Tune Guide using Hugging Face Transformers: https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune
• The Gemma 270M model on Hugging Face: https://huggingface.co/google/gemma-3-270m
• The Gemma 270M model on Ollama: https://ollama.com/library/gemma3:270m
• Building AI Agents with Gemma 3, a workshop with Ravin and Hugo: https://www.youtube.com/live/-IWstEStqok (code: https://github.com/canyon289/ai_agent_basics)
• From Images to Agents: Building and Evaluating Multimodal AI Workflows, a workshop with Ravin and Hugo: https://www.youtube.com/live/FNlM7lSt8Uk (code: https://github.com/canyon289/ai_image_agent)
• Evaluating AI Agents: From Demos to Dependability, an upcoming workshop with Ravin and Hugo: https://lu.ma/ezgny3dl
• Upcoming events on Luma: https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk
• Watch the podcast video on YouTube: https://youtu.be/VZDw6C2A_8E
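If you want to poke at the model while you listen, here is a minimal sketch of calling a locally served Gemma 270M through Ollama's REST API from Java, using only the JDK's built-in HTTP client. It is an illustration under stated assumptions, not code from the episode: it assumes the model has been pulled from the Ollama link above (ollama pull gemma3:270m) and that the server is listening on its default port; the prompt is just an example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Gemma270mDemo {

    public static void main(String[] args) throws Exception {
        // Request body for Ollama's /api/generate endpoint; "stream": false
        // asks for a single JSON object instead of a stream of chunks.
        String body = """
                {"model": "gemma3:270m",
                 "prompt": "In one sentence: when is a 270M-parameter model the right tool?",
                 "stream": false}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The reply is raw JSON; the generated text sits in its "response" field.
        System.out.println(response.body());
    }
}
```

A model this small responds quickly enough on modest CPUs that this kind of local round-trip is practical for the scripting and fine-tuning experiments the episode describes.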

Les Cast Codeurs Podcast
LCC 329 - L'IA, ce super stagiaire qui nous fait travailler plus

Les Cast Codeurs Podcast

Aug 14, 2025 · 120:24


Arnaud and Guillaume explore the evolution of the Java ecosystem with Java 25, Spring Boot, and Quarkus, as well as the latest trends in artificial intelligence with new models like Grok 4 and Claude Code. The hosts also take stock of cloud infrastructure and the challenges around MCP and CLIs, while discussing the impact of AI on developer productivity and on managing technical debt.

Recorded on August 8, 2025. Download the episode: LesCastCodeurs-Episode-329.mp3, or watch the video on YouTube.

News

Languages

Java 25: JEP 515, Ahead-of-Time method profiling
https://openjdk.org/jeps/515
• JEP 515 aims to improve the startup and warmup time of Java applications.
• The idea is to collect method execution profiles during an earlier run, then make them immediately available when the virtual machine starts.
• This lets the JIT compiler generate native code from the very start, without waiting for the application to be running.
• The change requires no modification to application, library, or framework code.
• Integration happens through the existing AOT cache creation commands. See also https://openjdk.org/jeps/483 and https://openjdk.org/jeps/514

Java 25: JEP 518, Cooperative JFR sampling
https://openjdk.org/jeps/518
• JEP 518 aims to improve the stability and scalability of the JDK Flight Recorder (JFR) feature for execution profiling.
• The mechanism for sampling Java thread call stacks is reworked to run only at safepoints, which reduces the risk of instability.
• The new model allows safer stack walking, notably with the ZGC garbage collector, and more efficient sampling that supports concurrent stack walking.
• The JEP adds a new event, SafepointLatency, which records the time a thread needs to reach a safepoint.
• The approach makes the sampling process lighter and faster, because the work of building stack traces is delegated to the target thread itself.

Libraries

Spring Boot 4 M1
https://spring.io/blog/2025/07/24/spring-boot-4-0-0-M1-available-now
• Spring Boot 4.0.0-M1 updates many internal and external dependencies to improve stability and compatibility.
• Types annotated with @ConfigurationProperties can now reference types located in external modules thanks to @ConfigurationPropertiesSource.
• Support for SSL certificate validity information has been simplified, removing the WILL_EXPIRE_SOON state in favor of VALID.
• Micrometer metrics auto-configuration now supports the @MeterTag annotation on methods annotated with @Counted and @Timed, evaluated via SpEL.
• @ServiceConnection support for MongoDB now includes integration with Testcontainers' MongoDBAtlasLocalContainer.
• Some features and APIs have been deprecated, with recommendations for migrating custom endpoints off the Spring Boot 2-era versions.
• Milestone and release-candidate versions are now published to Maven Central, in addition to the traditional Spring repository.
• A migration guide has been published to ease the transition from Spring Boot 3.5 to 4.0.0-M1.
Moving from Spring Boot to Quarkus: a field report https://blog.stackademic.com/we-switched-from-spring-boot-to-quarkus-heres-the-ugly-truth-c8a91c2b8c53
- A team migrated a Java application from Spring Boot to Quarkus to gain performance and reduce memory consumption, and to optimize the application for cloud-native deployment.
- The migration turned out to be more complex than expected, notably because of incompatibilities with certain libraries and a less mature Quarkus ecosystem.
- Code had to be reworked and some Spring Boot-specific features abandoned.
- The performance and memory gains are real, but the migration demands a genuine adaptation effort.
- The Quarkus community is progressing, but support remains limited compared to Spring Boot.
- Conclusion: Quarkus is attractive for new projects or projects ready to be rewritten, but migrating an existing project is a real challenge.

LangChain4j 1.2.0: new features and improvements https://github.com/langchain4j/langchain4j/releases/tag/1.2.0
- Stable modules: langchain4j-anthropic, langchain4j-azure-open-ai, langchain4j-bedrock, langchain4j-google-ai-gemini, langchain4j-mistral-ai, and langchain4j-ollama are now stable in version 1.2.0.
- Experimental modules: most other LangChain4j modules are at 1.2.0-beta8 and remain experimental/unstable.
- Updated BOM: langchain4j-bom has been bumped to 1.2.0, pulling in the latest versions of all modules.
- Main improvements: support for reasoning/thinking in models; streaming of partial tool calls; an MCP option to automatically expose resources as tools; for OpenAI, custom request parameters and access to raw HTTP responses and SSE events; improved error handling and documentation; metadata filtering for Infinispan (cc Katia).
- And 1.3.0 is already available https://github.com/langchain4j/langchain4j/releases/tag/1.3.0 with two new experimental modules, langchain4j-agentic and langchain4j-agentic-a2a, which introduce a set of abstractions and utilities for building agentic applications.
(A minimal usage sketch follows below.)

Infrastructure

This time it really is the year of Linux on the desktop! https://www.lesnumeriques.com/informatique/c-est-enfin-arrive-linux-depasse-un-seuil-historique-que-microsoft-pensait-intouchable-n239977.html
- Linux has passed the 5% mark in the USA.
- The progression is largely explained by the rise of Linux-based systems in professional environments, on servers, and in some consumer uses.
- Microsoft, long dominant with Windows, saw this threshold as hard to reach in the short term.
- Linux's success is also fueled by the growing popularity of open-source distributions, which are lighter, more customizable, and suited to varied uses.
- Cloud, IoT, and server infrastructure run massively on Linux, contributing to the overall increase.
- This symbolic tipping point marks a shift of balance in the operating-system ecosystem.
- Windows nonetheless keeps a strong presence in some segments, notably among consumers and traditional companies.
- The evolution shows the dynamism and growing maturity of Linux solutions, now credible and robust alternatives to proprietary offerings.
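As a companion to the LangChain4j notes above, here is a minimal, hedged sketch of an "AI service" built on one of the newly stable modules; the model name, prompt, and environment variable are illustrative assumptions.

```java
// Hedged sketch: a LangChain4j AI service backed by the stable
// langchain4j-google-ai-gemini module (1.2.0). Names are illustrative.
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class ReleaseNotesDemo {

    // LangChain4j generates an implementation of this interface at runtime.
    interface Summarizer {
        @SystemMessage("Answer in one short sentence.")
        String summarize(String releaseNotes);
    }

    public static void main(String[] args) {
        var model = GoogleAiGeminiChatModel.builder()
                .apiKey(System.getenv("GOOGLE_AI_API_KEY")) // assumed env var
                .modelName("gemini-2.0-flash")
                .build();

        Summarizer summarizer = AiServices.create(Summarizer.class, model);
        System.out.println(summarizer.summarize(
                "LangChain4j 1.2.0 stabilizes six provider modules and adds partial tool-call streaming."));
    }
}
```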
Cloud

Cloudflare 1.1.1.1 disappears for an hour of the internet https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
- On July 14, 2025, Cloudflare's public DNS service 1.1.1.1 suffered a major 62-minute outage, making the service unavailable for most users worldwide. The outage also caused intermittent degradation of the Gateway DNS service.
- The incident followed an update to the topology of Cloudflare services that activated a configuration error introduced in June 2025.
- Because of that error, prefixes intended for the 1.1.1.1 service were accidentally included in a new data-localization service (Data Localization Suite), which disrupted anycast routing.
- As a result, users could not resolve domain names via 1.1.1.1, making most internet services unreachable for them.
- This was not the result of an attack or a BGP problem, but an internal configuration error.
- Cloudflare quickly identified the cause, corrected the configuration, and put measures in place to prevent this kind of incident in the future. The service returned to normal after about an hour of unavailability.
- The incident underlines the complexity and sensitivity of anycast infrastructures and the need for rigorous management of network configuration.

Web

The evolution of Node.js best practices https://kashw1n.com/blog/nodejs-2025/
- Node.js in 2025: development is turning toward web standards, with fewer external dependencies and a better developer experience.
- ES Modules (ESM) by default: replacing CommonJS for better tooling and alignment with web standards; use of the node: prefix for built-in modules to avoid conflicts.
- Built-in web APIs: fetch, AbortController, and AbortSignal are now native, reducing the need for libraries like axios.
- Built-in test runner: no more need for Jest or Mocha in most cases, with a watch mode and coverage reports included.
- Advanced async patterns: heavier use of async/await with Promise.all() for parallelism, and AsyncIterators for event streams.
- Worker Threads for parallelism: for CPU-heavy tasks, avoiding blocking the main event loop.
- Improved developer experience: built-in --watch mode (replaces nodemon) and --env-file support (replaces dotenv).
- Security and performance: an experimental permission model to restrict access, and native performance hooks for monitoring.
- Simplified distribution: single-executable builds to ease the deployment of applications or command-line tools.

Apache ECharts 6 released after 12 years! https://echarts.apache.org/handbook/en/basics/release-note/v6-feature/
- Apache ECharts 6.0: official release after 12 years of evolution, with 12 major upgrades for data visualization along three key dimensions.
- More professional visual presentation: new default theme (modern design), dynamic theme switching, dark-mode support.
- Pushing the limits of data expression: new chart types (chord chart, beeswarm chart) and new features (jittering for dense scatter plots, broken axes).
- Improved stock (candlestick) charts.
- Freedom of composition: a new matrix coordinate system; improved custom series (code reuse, npm publishing); new bundled custom charts (violin, contour, etc.); optimized axis-label layout.

Data and Artificial Intelligence

Grok 4 took itself for a Nazi because of its tools https://techcrunch.com/2025/07/15/xai-says-it-has-fixed-grok-4s-problematic-responses/
- At launch, Grok 4 generated offensive responses, notably calling itself "MechaHitler" and adopting antisemitic language.
- The behavior came from an automatic web search that misread a viral meme as the truth.
- Grok also aligned its controversial answers with the opinions of Elon Musk and xAI, which amplified the bias.
- xAI determined that the failures were due to an internal update embedding instructions that encouraged offensive humor and alignment with Musk.
- To fix this, xAI removed the faulty code, reworked the system prompts, and imposed guidelines requiring Grok to perform independent analysis using diverse sources.
- Grok must now avoid any bias, drop the politically incorrect humor, and analyze sensitive subjects objectively.
- xAI apologized, explaining that the failures came from a prompt problem, not from the model itself.
- The incident highlights the persistent alignment and safety challenges AI models face from indirect injections of online content.
- The fix is not just a technical patch, but an example of the major ethical and accountability stakes in deploying AI at scale.

Guillaume published a whole series of articles on agentic patterns with the ADK framework for Java https://glaforge.dev/posts/2025/07/29/mastering-agentic-workflows-with-adk-the-recap/
- A first article explains how to split tasks across AI sub-agents: https://glaforge.dev/posts/2025/07/23/mastering-agentic-workflows-with-adk-sub-agents/
- A second article details how to organize agents sequentially: https://glaforge.dev/posts/2025/07/24/mastering-agentic-workflows-with-adk-sequential-agent/
- A third article explains how to parallelize independent tasks: https://glaforge.dev/posts/2025/07/25/mastering-agentic-workflows-with-adk-parallel-agent/
- And finally, how to build improvement loops: https://glaforge.dev/posts/2025/07/28/mastering-agentic-workflows-with-adk-loop-agents/
- All of it in Java, of course :slightly_smiling_face: (a hedged sketch follows below)
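In the spirit of Guillaume's sequential-agent article, here is a hedged sketch of a two-step ADK pipeline; the builder calls mirror those shown in the series, but treat the exact signatures, names, and model string as assumptions to check against the posts.

```java
// Hedged sketch of a sequential agentic workflow with Google's ADK for Java.
// Running it would also require an ADK Runner and session, elided here.
import com.google.adk.agents.LlmAgent;
import com.google.adk.agents.SequentialAgent;
import java.util.List;

public class PipelineDemo {
    public static void main(String[] args) {
        LlmAgent writer = LlmAgent.builder()
                .name("writer")
                .model("gemini-2.0-flash")
                .instruction("Draft a short paragraph on the given topic.")
                .build();

        LlmAgent reviewer = LlmAgent.builder()
                .name("reviewer")
                .model("gemini-2.0-flash")
                .instruction("Critique and tighten the draft you receive.")
                .build();

        // Runs writer first, then reviewer, passing state along the chain.
        SequentialAgent pipeline = SequentialAgent.builder()
                .name("write-then-review")
                .subAgents(List.of(writer, reviewer))
                .build();
    }
}
```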
Six weeks of coding with Claude https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
- Orta shares his feedback after six weeks of daily use of Claude Code, which has profoundly changed the way he codes.
- He no longer really "codes" line by line: he describes what he wants, lets Claude propose a solution, then corrects or adjusts.
- This lets him focus on the result rather than the implementation, like moving from painting to polaroids.
- Claude proves particularly useful for maintenance tasks: migrations, refactors, code cleanup.
- He always stays in control, reviews every generated diff, and guides the AI with well-framed prompts.
- He notes that it takes a few weeks to find the right rhythm: learning to slice up tasks and state expectations clearly.
- Simple tasks become almost instantaneous, but complex tasks still require experience and judgment.
- Claude Code is seen as a very good copilot, but it does not replace the developer who understands the system as a whole.
- The main gain is faster feedback and a much shorter iteration loop.
- This kind of tool could well redefine how we think about and structure software development in the medium term.

Claude Code and MCP servers: how to turn your terminal into a superpowered assistant https://touilleur-express.fr/2025/07/27/claude-code-et-les-serveurs-mcp-ou-comment-transformer-ton-terminal-en-assistant-surpuissant/
- Nicolas continues his exploration of Claude Code and explains how MCP servers make Claude far more effective.
- The Context7 MCP shows how to feed the AI up-to-date technical documentation (for example, Next.js 15) to avoid hallucinations or errors.
- The Task Master MCP turns a requirements document (PRD) into atomic, estimated, organized tasks forming a work plan.
- The Playwright MCP drives browsers and runs end-to-end tests.
- The Digital Ocean MCP makes it easy to deploy the application to production.
- Everything is not ideal though: quotas are exhausted within a few hours on a small application, and in some cases it remains much more efficient to do things yourself (for an experienced coder).
- Nicolas follows up this article with an MVP written in about twenty hours: https://touilleur-express.fr/2025/07/30/comment-jai-code-un-mvp-en-une-vingtaine-dheures-avec-claude-code/

Augmented development: a politically correct opinion, but still… https://touilleur-express.fr/2025/07/31/le-developpement-augmente-un-avis-politiquement-correct-mais-bon/
- Nicolas shares a nuanced (and slightly provocative) opinion on augmented development, where an AI like Claude Code assists the developer without replacing them.
- He rejects the idea that this is "too magical" or "too easy": it is a logical evolution of our craft, not a shortcut for the lazy.
- For him, a good developer is still someone who structures their thinking, can frame a problem, slice it up, and validate it, even if the AI helps them code faster.
- He recounts building an OAuth app, tested, styled, and deployed within a few hours, without ever leaving the terminal thanks to Claude.
- This kind of tooling changes our relationship to time: we go from "I'll think about it" to "let me immediately try a version that more or less works".
- He embraces this fast, imperfect approach: better a rough version shipped quickly than a project stuck in perfectionism.
- To him, the AI is a super intern: never tired, sometimes off the mark, but devilishly productive when well briefed.
- He concludes that "augmented development" does not replace good developers… but average developers had better get on board, or risk being left behind.

ChatGPT launches study mode: interactive, step-by-step learning https://openai.com/index/chatgpt-study-mode/
- OpenAI offers a study mode in ChatGPT that guides users step by step rather than giving the answer directly.
- This mode aims to encourage active thinking and deep learning.
- It uses custom instructions to ask questions and provide explanations suited to the user's level.
- Study mode helps manage cognitive load and stimulates metacognition.
- It offers structured answers to ease progressive understanding of a subject.
- Available now for signed-in users, the mode will be integrated into ChatGPT Edu.
- The goal is to turn ChatGPT into a genuine digital tutor, helping students absorb knowledge better.
- Gemini has apparently just released a similar feature.

OpenAI launches GPT-OSS https://openai.com/index/introducing-gpt-oss/ https://openai.com/index/gpt-oss-model-card/
- OpenAI released GPT-OSS, its first open-weight model family since GPT-2.
- Two models are available, gpt-oss-120b and gpt-oss-20b, mixture-of-experts models designed for reasoning and agentic tasks.
- The models are distributed under the Apache 2.0 license, allowing free use and customization, including for commercial applications.
- gpt-oss-120b performs close to OpenAI o4-mini, while gpt-oss-20b is comparable to o3-mini.
- OpenAI also open-sourced a rendering tool called Harmony, in Python and Rust, to ease adoption.
- The models are optimized to run locally and are supported by platforms such as Hugging Face and Ollama.
- OpenAI conducted safety research to ensure the models cannot be fine-tuned for malicious uses in the biological, chemical, or cyber domains.

Anthropic launches Opus 4.1 https://www.anthropic.com/news/claude-opus-4-1
- Anthropic published Claude Opus 4.1, an update of its language model.
- The new version focuses on improved performance in coding, reasoning, and research and data-analysis tasks.
- The model scored 74.5% on the SWE-bench Verified benchmark, an improvement over the previous version.
- It particularly shines at multi-file code refactoring and can carry out deep research.
- Claude Opus 4.1 is available to paying Claude users, as well as via the API, Amazon Bedrock, and Google Cloud's Vertex AI, at the same prices as Opus 4.
- It is presented as a drop-in replacement for Claude Opus 4, with higher performance and precision on real-world programming tasks.

OpenAI summer update: GPT-5 is out https://openai.com/index/introducing-gpt-5/
Details: https://openai.com/index/gpt-5-new-era-of-work/ https://openai.com/index/introducing-gpt-5-for-developers/ https://openai.com/index/gpt-5-safe-completions/ https://openai.com/index/gpt-5-system-card/
- Major upgrade of cognitive capabilities: GPT-5 shows a clearly higher level of reasoning, abstraction, and understanding than previous models.
- Two main variants: gpt-5-main, fast and efficient for general tasks; and gpt-5-thinking, slower but specialized in complex tasks requiring deep reflection.
- Built-in intelligent router: the system automatically selects the version best suited to the task (fast or thoughtful), without user intervention.
- Further extended context window: GPT-5 can process longer volumes of text (up to 1 million tokens in some versions), useful for entire documents or projects.
- Significant reduction in hallucinations: GPT-5 gives more reliable answers, with fewer invented errors or false claims.
- More neutral, less sycophantic behavior: it was trained to better resist excessive alignment with the user's opinions.
- Increased ability to follow complex instructions: GPT-5 better understands long, implicit, or nuanced prompts.
- A "safe completions" approach: refusals are replaced by useful but safe answers; the model tries to respond cautiously rather than block.
- Ready for large-scale professional use: optimized for business work such as writing, programming, synthesis, automation, and task management.
- Specific improvements for coding: GPT-5 is better at writing code, understanding complex software contexts, and using development tools.
- A faster, smoother user experience: the system reacts more quickly thanks to optimized orchestration across the sub-models.
- Strengthened agentic capabilities: GPT-5 can serve as the base for autonomous agents able to accomplish goals with little human intervention.
- Mastered multimodality (text, image, audio): GPT-5 integrates multiple formats more fluidly, in a single model.
- Features designed for developers: clearer documentation, a unified API, more transparent and customizable models.
- Increased contextual personalization: the system adapts better to the user's style, tone, or preferences, without repeated instructions.
- Optimized energy and hardware usage: thanks to the internal router, resources are used more efficiently according to task complexity.
- Secure integration into ChatGPT products: already deployed in ChatGPT, with immediate benefits for Pro and enterprise users.
- A unified model for all uses: a single system able to go from light conversation to scientific analysis or complex code.
- Priority on safety and alignment: GPT-5 was designed from the start to minimize abuse, bias, and undesirable behavior.
- Still not an AGI: OpenAI insists that despite its impressive capabilities, GPT-5 is not an artificial general intelligence.

No, juniors are not obsolete despite AI! (says GitHub) https://github.blog/ai-and-ml/generative-ai/junior-developers-arent-obsolete-heres-how-to-thrive-in-the-age-of-ai/
- AI is transforming software development, but junior developers are not obsolete.
- New learners are well positioned, since they are already familiar with AI tools.
- The goal is to develop skills for working with AI, not to be replaced by it.
- Creativity and curiosity are key human qualities.
- Five ways to stand out: use AI (e.g., GitHub Copilot) to learn faster, not just code faster (tutor mode, temporarily disabling autocompletion); build public projects that demonstrate your skills (including with AI); master the essential GitHub workflows (GitHub Actions, open-source contribution, pull requests); sharpen your expertise by reviewing code (asking questions, spotting patterns, taking notes); and debug smarter and faster with AI (e.g., Copilot Chat for explanations, fixes, tests).
Write your first AI agent with A2A on WildFly, by Emmanuel Hugonnet https://www.wildfly.org/news/2025/08/07/Building-your-First-A2A-Agent/
- Agent2Agent (A2A) protocol: an open standard for universal interoperability of AI agents. It enables efficient communication and collaboration between agents from different vendors and frameworks, creating unified multi-agent ecosystems that automate complex workflows.
- Purpose of the article: a guide to building a first A2A agent (a weather agent) on WildFly, using the A2A Java SDK for Jakarta Servers, the WildFly AI Feature Pack, an LLM (Gemini), and a Python tool (MCP). The agent conforms to A2A v0.2.5.
- Prerequisites: JDK 17+, Apache Maven 3.8+, a Java IDE, a Google AI Studio API key, Python 3.10+, uv.
- Steps for building the weather agent:
- Create the LLM service: a Java interface (WeatherAgent) using LangChain4j to interact with an LLM and a Python MCP tool (get_alerts, get_forecast functions); a hedged sketch follows below.
- Define the A2A agent (via CDI): the Agent Card provides the agent's metadata (name, description, URL, capabilities, skills such as "weather_search"); the Agent Executor handles incoming A2A requests, extracts the user message, calls the LLM service, and formats the response.
- Expose the agent: register a JAX-RS application for the endpoints.
- Deploy and test: configure Google's A2A-inspector tool (in a Podman container), build the Maven project, set the environment variables (e.g., GEMINI_API_KEY), and start the WildFly server.
- Conclusion: a minimal transformation turns an AI application into an A2A agent, enabling collaboration and information sharing between AI agents regardless of their underlying infrastructure.

Tooling

IntelliJ IDEA moves to a unified distribution https://blog.jetbrains.com/idea/2025/07/intellij-idea-unified-distribution-plan/
- Starting with version 2025.3, IntelliJ IDEA Community Edition will no longer be distributed separately.
- A single unified IntelliJ IDEA will bring together the features of the Community and Ultimate editions.
- The advanced Ultimate features will be available through subscription.
- Users without a subscription will get a free version richer than today's Community Edition.
- The unification aims to simplify the user experience and reduce the differences between editions.
- Community users will be automatically migrated to this new unified version.
- Ultimate features can be enabled temporarily with a single click.
- If an Ultimate subscription expires, the user can keep using the installed version with a limited set of free features, without interruption.
- The change reflects JetBrains' commitment to open source and to adapting to the community's needs.

Support for YAML anchors in GitHub Actions https://github.com/actions/runner/issues/1182#issuecomment-3150797791
- To avoid duplicating content in a workflow, anchors let you insert reusable chunks of YAML.
- A feature awaited for years, and long available in GitLab. It was rolled out on August 4.
- Beware of overusing it, though: such documents are not that easy to read.
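Returning to the WildFly A2A weather agent above, here is a hedged sketch of the LangChain4j-style AI-service interface behind it; the exact interface in the article may differ, and the tool wiring to the Python MCP server is elided.

```java
// Hedged sketch of the WeatherAgent AI service used by the A2A agent.
// LangChain4j generates the implementation; MCP tool registration
// (get_alerts, get_forecast) happens when the service is built.
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;

public interface WeatherAgent {

    @SystemMessage("""
            You are a weather assistant. Use the available tools
            (get_alerts, get_forecast) to answer the user's question.""")
    String chat(@UserMessage String question);
}
```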
Gemini CLI adds custom commands, like Claude https://cloud.google.com/blog/topics/developers-practitioners/gemini-cli-custom-slash-commands
- But they are in TOML format, so they cannot be shared with Claude :disappointed:

Automate your AI workflows with Claude Code hooks https://blog.gitbutler.com/automate-your-ai-workflows-with-claude-code-hooks/
- Claude Code offers hooks that run scripts at various points in a session, for example at the start, when tools are used, or at the end.
- These hooks make it easy to automate tasks such as managing Git branches, sending notifications, or integrating with other tools.
- A simple example is sending a desktop notification at the end of a session.
- Hooks are configured through three distinct JSON files depending on the scope: user, project, or local.
- On macOS, sending notifications requires a specific permission via the "Script Editor" application.
- An up-to-date version of Claude Code is required to use these hooks.
- GitButler can now integrate with Claude Code through these hooks: https://blog.gitbutler.com/parallel-claude-code/

JetBrains' Git client soon available standalone https://lp.jetbrains.com/closed-preview-for-jetbrains-git-client/
- Requested by some users for a long time.
- It would be a graphical client in the same vein as GitButler, SourceTree, etc.

Apache Maven 4 is coming… and the mvnup utility will help you upgrade https://maven.apache.org/tools/mvnup.html
- Fixes known incompatibilities.
- Cleans up redundancies and default values (versions, for example) that are unnecessary for Maven 4.
- Reformats according to Maven conventions, and more.

A GitHub Action for Gemini CLI https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
- Google launched Gemini CLI GitHub Actions, an AI agent that acts as a "code teammate" for GitHub repositories.
- The tool is free and designed to automate routine tasks such as issue triage, pull request review, and other development chores.
- It acts both as an autonomous agent and as a collaborator developers can call on demand, notably by mentioning it in an issue or a pull request.
- It is based on the Gemini CLI, an open-source AI agent that brings the Gemini model directly into the terminal.
- It runs on the GitHub Actions infrastructure, isolating processes in separate containers for security reasons.
- Three open-source workflows are available at launch: intelligent issue triage, pull request review, and on-demand collaboration.

No need for MCP, code is all you need https://lucumr.pocoo.org/2025/7/3/tools/
- Armin stresses that he is not a fan of the MCP (Model Context Protocol) in its current form: it lacks composability and demands too much context.
- He notes that for the same task (e.g., GitHub), using the CLI is often faster and more context-efficient than going through an MCP server.
- In his view, code remains the simplest and most reliable solution, especially for automating repetitive tasks.
- He prefers writing clear scripts over relying on LLM inference: it makes verification and maintenance easier and avoids subtle errors.
- For recurring tasks, if you automate them, better to do it with reusable code than to let the AI guess every time.
- He illustrates this by converting his entire blog from reStructuredText to Markdown: rather than using AI directly, he asked Claude to generate a complete script, with AST parsing, file comparison, validation, and iteration.
- This LLM→code→LLM workflow (analysis and validation) gave him confidence in the final result, while keeping a human in control of the process.
- He judges that MCP does not allow this kind of reliable automated pipeline, because it introduces too much inference and too much variation per call.
- For him, coding remains the best way to keep control, reproducibility, and clarity in automated workflows.

MCP vs CLI… https://www.async-let.com/blog/my-take-on-the-mcp-verses-cli-debate/
- Cameron recounts his experience building the XcodeBuildMCP server, which helped him better understand the debate between serving the AI via MCP and letting the AI use the system's CLIs directly.
- In his view, CLIs remain preferable for expert developers seeking control, transparency, performance, and simplicity.
- But MCP servers excel at complex workflows, persistent contexts, security constraints, and easier access for less experienced users.
- He acknowledges the criticism that MCP consumes too much context ("context bloat") and that CLI calls can be faster and easier to understand.
- However, he argues that many problems come from the quality of client implementations, not from the MCP protocol itself.
- A good MCP server can offer carefully designed tools that make the AI's life easier (for example, returning structured data rather than raw text to parse).
- He appreciates MCP's ability to offer stateful operations (sessions, memory, captured logs), which CLIs do not handle naturally.
- Some scenarios cannot work via CLI at all (no accessible shell), whereas MCP, as a client-independent protocol, remains usable by any client.
- His verdict: there is no universal solution; each context deserves evaluation, and neither MCP nor CLI should be imposed at all costs.

Jules, Google's free asynchronous coding agent, is out of beta and available to everyone https://blog.google/technology/google-labs/jules-now-available/
- Jules, the asynchronous coding agent, is now publicly available, powered by Gemini 2.5 Pro.
- Beta phase: 140,000+ code improvements and feedback from thousands of developers.
- Improvements: user interface, bug fixes, configuration reuse, GitHub Issues integration, multimodal support.
- Gemini 2.5 Pro improves coding plans and code quality.
- New structured tiers: introductory, Google AI Pro (5x higher limits), Google AI Ultra (20x higher limits).
- Immediate rollout for Google AI Pro and Ultra subscribers, including eligible students (one free year of AI Pro).

Architecture

Putting a value on reducing technical debt: a real challenge https://www.lemondeinformatique.fr/actualites/lire-valoriser-la-reduction-de-la-dette-technique-mission-impossible-97483.html
- Technical debt is a poorly understood concept that is hard to value financially for general management.
- CIOs struggle to measure this debt precisely, to allocate dedicated budgets, and to prove a clear return on investment.
- This difficulty limits the prioritization of debt-reduction projects against other initiatives judged more urgent or strategic.
- Some companies are gradually integrating technical-debt management into their development processes.
- Approaches such as Software Crafting aim to improve code quality to limit the accumulation of this debt.
- The lack of suitable tools for measuring progress makes the effort even more complex.
- In short, reducing technical debt remains a delicate mission that requires innovation, method, and internal awareness-raising.

Don't mock me… https://martinelli.ch/why-i-dont-use-mocking-frameworks-and-why-you-might-not-need-them-either/ https://blog.tremblay.pro/2025/08/not-using-mocking-frmk.html
- The author prefers hand-written fakes or stubs over mocking frameworks like Mockito or EasyMock.
- Mocking frameworks isolate the code, but often lead to tight coupling between tests and implementation details, and to tests that validate the mock rather than the real behavior.
- Two fundamental principles guide his approach: favor a functional design with pure business logic (functions without side effects), and control your test data, for example by using real databases (via Testcontainers) rather than simulating them.
- In his practice, the only cases where an external mock is used are external HTTP services, and even then he prefers to fake only the transport rather than the business behavior.
- The result: tests that are simpler, faster to write, more reliable, and less fragile as the code evolves (a hand-rolled fake is sketched below).
- The article concludes that with a well-designed codebase, you may very well not need mocking frameworks at all.
- Henri Tremblay's blog post in response adds some nuance to these conclusions.
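To illustrate the fakes-over-mocks idea, here is our own minimal Java example (not taken from the article): a hand-written in-memory fake standing in for a Mockito mock.

```java
// Hand-rolled fake: real behavior, no mocking framework, reusable across tests.
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

interface UserRepository {
    Optional<String> findEmail(long userId);
    void save(long userId, String email);
}

class InMemoryUserRepository implements UserRepository {
    private final Map<Long, String> emails = new HashMap<>();
    @Override public Optional<String> findEmail(long userId) { return Optional.ofNullable(emails.get(userId)); }
    @Override public void save(long userId, String email) { emails.put(userId, email); }
}

class UserService {
    private final UserRepository repo;
    UserService(UserRepository repo) { this.repo = repo; }
    String emailOrDefault(long id) { return repo.findEmail(id).orElse("unknown@example.com"); }
}

// In a JUnit test, exercise UserService against the fake instead of a mock:
//   UserRepository repo = new InMemoryUserRepository();
//   repo.save(1L, "katia@example.com");
//   assertEquals("katia@example.com", new UserService(repo).emailOrDefault(1L));
```

The test asserts on observable behavior rather than on interactions with a mock, which is exactly the decoupling the article argues for.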
Methodologies

What does it mean to be a good PM (Product Manager)? An article by Chris Perry, a PM at Google: https://thechrisperry.substack.com/p/being-a-good-pm-at-google
- The PM role is hard: a demanding job, where you have to be the most invested person on the team to ensure success.
- 1. Shipping is all that matters: the absolute priority. Better to ship and iterate quickly than to chase perfection in theory; a shipped product lets you learn from reality.
- 2. Inspire a longing for the open sea: the best way to move a project forward is to inspire the team with a strong, desirable vision. Show the "why".
- 3. Use your product every day: non-negotiable for success; it builds intuition and reveals the real problems that user research does not always show.
- 4. Be a good friend: building genuine relationships and helping others is a key long-term success factor. Trust is the basis of fast execution.
- 5. Give more than you take: always look to help and collaborate; cooperation is the optimal long-term strategy. Don't be possessive about your ideas.
- 6. Use the right lever: to get a decision, identify the right person with the power to say "yes", and don't let yourself be blocked by non-decision-making opinions.
- 7. Only go where you add value: fill the gaps, do the thankless work nobody wants to do. Also know when to step away (meetings, projects) when you are not useful.
- 8. Success has many parents, failure is an orphan: if the product succeeds, it is a team success; if it fails, it is the PM's fault. You have to own the final responsibility.
- Conclusion: the PM is a conductor. They cannot play every instrument, but their role is to humbly orchestrate everyone's work to create something harmonious.

Testing production-ready Spring Boot applications: key points https://www.wimdeblauwe.com/blog/2025/07/30/how-i-test-production-ready-spring-boot-applications/
- The author (Wim Deblauwe) details how he structures tests in a Spring Boot application intended for production.
- The project automatically includes the spring-boot-starter-test dependency, which bundles JUnit 5, AssertJ, Mockito, Awaitility, JsonAssert, XmlUnit, and the Spring testing utilities.
- Unit tests: target pure functions (records, utilities), tested simply with JUnit and AssertJ without starting the Spring context.
- Use-case tests: orchestrate the business logic, generally via use cases that rely on one or more data repositories.
- JPA/repository tests: verify interactions with the database through CRUD operations (with a Spring context for the persistence layer).
- Controller tests: exercise the web endpoints (e.g., @WebMvcTest), often with MockBean to simulate dependencies; a hedged sketch follows below.
- Full integration tests: start the entire Spring context (@SpringBootTest) to test the application as a whole.
- The author also mentions architecture tests, without going into detail in this article.
- The result: a test pyramid going from the fastest (unit) to the most complete (integration), guaranteeing reliability, speed, and coverage without needless overhead.

Security

Bitwarden offers an MCP server so agents can access passwords https://nerds.xyz/2025/07/bitwarden-mcp-server-secure-ai/
- Bitwarden introduces an MCP (Model Context Protocol) server designed to securely integrate AI agents into password-management workflows.
- The server runs local-first: all interactions and sensitive data stay on the user's machine, guaranteeing the zero-knowledge encryption principle.
- Integration goes through Bitwarden's CLI, allowing AI agents to generate, retrieve, modify, and lock credentials via secured commands.
- The server can be self-hosted for maximum control over the data.
- The MCP protocol is an open standard that uniformly connects AI agents to third-party data sources and tools, simplifying integrations between LLMs and applications.
- A demo with Claude (Anthropic's AI agent) shows the AI interacting with the Bitwarden vault: checking status, unlocking the vault, generating or modifying credentials, all without direct human intervention.
- Bitwarden takes a security-first approach, but acknowledges the risks of autonomous AI; using a private local LLM is strongly recommended to limit vulnerabilities.
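Returning to Wim Deblauwe's controller-slice tests above, here is a minimal, hedged sketch; the controller, route, and assertions are our illustrative assumptions (a real project would keep the controller in main sources and have a @SpringBootApplication class for the slice to bootstrap).

```java
// Hedged sketch: a @WebMvcTest slice that starts only the web layer.
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

// Illustrative controller; in the article's setup it would live in main sources.
@RestController
class GreetingController {
    @GetMapping("/greetings/{name}")
    String greet(@PathVariable String name) { return "Hello, " + name; }
}

// Loads only the web layer for the named controller, not the full context;
// collaborators would be stubbed with @MockBean.
@WebMvcTest(GreetingController.class)
class GreetingControllerTest {

    @Autowired
    MockMvc mockMvc;

    @Test
    void returnsGreeting() throws Exception {
        mockMvc.perform(get("/greetings/Wim"))
               .andExpect(status().isOk())
               .andExpect(content().string("Hello, Wim"));
    }
}
```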
NVIDIA has a critical security flaw https://www.wiz.io/blog/nvidia-ai-vulnerability-cve-2025-23266-nvidiascape
- It is a container-escape flaw in the NVIDIA Container Toolkit.
- The severity is rated critical, with a CVSS score of 9.0.
- The vulnerability lets a malicious container gain full root access on the host.
- The root cause is a misconfiguration of OCI hooks in the toolkit.
- Exploitation is very easy, for example with a Dockerfile of only three lines.
- The main risk is the compromise of isolation between customers on shared GPU cloud infrastructure.
- Affected versions include all versions of the NVIDIA Container Toolkit up to 1.17.7 and of the NVIDIA GPU Operator up to 25.3.1.
- To mitigate the risk, update to the latest patched versions. In the meantime, certain problematic hooks can be disabled in the configuration to limit exposure.
- The flaw highlights the importance of hardening shared GPU environments and managing AI containers carefully.

Tea app data leak: the essentials https://knowyourmeme.com/memes/events/the-tea-app-data-leak
- Tea is an application launched in 2023 that lets women leave anonymous reviews of men they have met.
- In July 2025, a major leak exposed about 72,000 sensitive images (selfies, ID documents) and more than 1.1 million private messages.
- The leak was revealed after a user shared a link to download the compromised database.
- The affected data mostly concerned users registered before February 2024, when the application migrated to a more secure infrastructure.
- In response, Tea plans to offer identity-protection services to the impacted users.

Flaw in the npm package "is": an expanding supply-chain attack https://socket.dev/blog/npm-is-package-hijacked-in-expanding-supply-chain-attack
- A phishing campaign targeting npm maintainers compromised several accounts, including that of the "is" package.
- Compromised versions of "is" (notably 3.3.1 and 5.0.0) contained a JavaScript malware loader aimed at Windows systems.
- The malware gave attackers remote access via WebSocket, potentially allowing arbitrary code execution.
- The attack follows other compromises of popular packages such as eslint-config-prettier, eslint-plugin-prettier, synckit, @pkgr/core, napi-postinstall, and got-fetch.
- All these packages were published without any commit or PR on their respective GitHub repositories, a sign of unauthorized access to maintainer tokens.
- The spoofed domain npnjs.com was used to collect access tokens through deceptive phishing emails.
- The episode highlights the fragility of software supply chains in the npm ecosystem and the need for stronger security practices around dependencies.

Automated security reviews with Claude Code https://www.anthropic.com/news/automate-security-reviews-with-claude-code
- Anthropic launched automated security features for Claude Code, its command-line AI coding assistant.
- These features answer the growing need to keep code secure while AI tools dramatically accelerate software development.
- /security-review command: developers can run this command in their terminal to ask Claude to identify security vulnerabilities, including SQL injection risks, cross-site scripting (XSS), authentication and authorization flaws, and insecure data handling. Claude can also suggest and implement fixes.
- GitHub Actions integration: a new GitHub action lets Claude Code automatically analyze every new pull request. The tool reviews code changes for vulnerabilities, applies customizable rules to filter false positives, and comments directly on the pull request with the detected issues and recommended fixes.
- These features are designed to create a consistent security-review process and plug into existing CI/CD pipelines, ensuring that no code reaches production without a baseline security review.

Law, society and organization

Google hires Windsurf's key people https://www.blog-nouvelles-technologies.fr/333959/openai-windsurf-google-deepmind-codage-agentique/
- Windsurf was supposed to be acquired by OpenAI.
- Google is not making an acquisition offer, but is poaching a few key Windsurf people, including its CEO.
- Windsurf therefore remains independent, but without some of its brains; the new leaders are the former heads of sales, so it is less of a tech company now.
- Why did the $3 billion deal fall through? Nobody knows, but divergence and technological independence are possibly among the causes.
- The departing people will work at DeepMind on agentic coding.

Opinion article: https://www.linkedin.com/pulse/dear-people-who-think-ai-low-skilled-code-monkeys-future-jan-moser-svade/
- Jan Moser criticizes those who think AI and low-skilled developers can replace competent software engineers.
- He cites the example of the Tea app, a safety platform for women, which exposed 72,000 user images because of a misconfigured Firebase and a lack of secure development practices.
- He underlines that the absence of automated checks and good security practices made this data leak possible.
- Moser warns that tools like AI cannot compensate for missing software-engineering skills, notably in security, error handling, and code quality.
- He calls for recognizing the value of skilled software engineers and for a more rigorous approach to software development.

YouTube rolls out age-estimation technology to identify US teens https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/
- A very topical subject, especially in the UK, but not only there.
- YouTube is starting to deploy AI-based age-estimation technology to identify teenage users in the United States, regardless of the age declared at signup.
- The technology analyzes various behavioral signals, such as watch history, the categories of videos viewed, and the age of the account.
- When a user is identified as a teenager, YouTube applies additional protections, notably: disabling personalized ads; enabling digital-wellbeing tools, such as screen-time and bedtime reminders; limiting repeated viewing of sensitive content, such as content related to body image.
- If a user is incorrectly identified as a minor, they can verify their age with a government ID, a credit card, or a selfie.
- This initial rollout covers a small group of users in the United States and will be extended progressively.
- The initiative is part of YouTube's efforts to strengthen the safety of young users online.

Mistral AI: a contribution to a global environmental standard for AI https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai
- Mistral AI carried out the first complete life-cycle analysis of an AI model, in collaboration with several partners.
- The study quantifies the environmental impact of the Mistral Large 2 model in terms of greenhouse-gas emissions, water consumption, and resource depletion.
- The training phase generated 20.4 kilotonnes of CO2 equivalent, consumed 281,000 m³ of water, and used 660 kg Sb-eq (mineral consumption).
- For a 400-token response, the marginal impact is small but not negligible: 1.14 grams of CO2, 45 mL of water, and 0.16 mg of antimony equivalent.
- Mistral proposes three indicators to evaluate this impact: the absolute impact of training, the marginal impact of inference, and the ratio of inference to total life-cycle impact.
- The company stresses the importance of choosing the model according to the use case to limit the environmental footprint.
- Mistral calls for more transparency and for the adoption of international standards allowing clear comparison between models.

AI promised us efficiency… mostly it makes us work more https://afterburnout.co/p/ai-promised-to-make-us-more-efficient
- AI tools were supposed to automate the tedious tasks and free up time for strategic and creative work.
- In reality, the time saved is often immediately reinvested in other tasks, creating overload.
- Users believe they are more productive with AI, but the data contradicts that impression: one study shows that developers using AI take 19% longer to complete their tasks.
- The DORA 2024 report observes an overall drop in team performance as AI usage increases: -1.5% throughput and -7.2% delivery stability for +25% AI adoption.
- AI does not reduce the mental load, it displaces it: writing prompts, checking dubious results, constant adjustments… all of which is exhausting and limits real focus time.
- This cognitive overload creates a form of mental debt: you do not really gain time, you pay for it differently.
- The real problem comes from our productivity culture, which pushes us to optimize everything, even at the cost of fueling burnout.
- Three concrete leads: rethink productivity not as time saved but as energy preserved; be selective about AI tools, based on how they feel rather than the hype; and accept the J-curve: AI can help, but deep adjustments are needed to produce real gains.
- The real productivity hack?
Sometimes, slowing down to stay lucid and sustainable.

Conferences

MCP Summit Europe https://mcpdevsummit.ai/

JavaOne returns in 2026 https://inside.java/2025/08/04/javaone-returns-2026/
- JavaOne, the conference dedicated to the Java community, makes its big return to the Bay Area from March 17 to 19, 2026.
- After the success of the 2025 edition, this return continues the conference's original mission: bringing the community together to learn, collaborate, and innovate.

The conference list below comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
25–27 August 2025: SHAKA Biarritz - Biarritz (France)
5 September 2025: JUG Summer Camp 2025 - La Rochelle (France)
12 September 2025: Agile Pays Basque 2025 - Bidart (France)
15 September 2025: Agile Tour Montpellier - Montpellier (France)
18–19 September 2025: API Platform Conference - Lille (France) & Online
22–24 September 2025: Kernel Recipes - Paris (France)
22–27 September 2025: La Mélée Numérique - Toulouse (France)
23 September 2025: OWASP AppSec France 2025 - Paris (France)
23–24 September 2025: AI Engineer Paris - Paris (France)
25 September 2025: Agile Game Toulouse - Toulouse (France)
25–26 September 2025: Paris Web 2025 - Paris (France)
30 September–1 October 2025: PyData Paris 2025 - Paris (France)
2 October 2025: Nantes Craft - Nantes (France)
2–3 October 2025: Volcamp - Clermont-Ferrand (France)
3 October 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
6–7 October 2025: Swift Connection 2025 - Paris (France)
6–10 October 2025: Devoxx Belgium - Antwerp (Belgium)
7 October 2025: BSides Mulhouse - Mulhouse (France)
7–8 October 2025: Agile en Seine - Issy-les-Moulineaux (France)
8–10 October 2025: SIG 2025 - Paris (France) & Online
9 October 2025: DevCon #25: quantum computing - Paris (France)
9–10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France)
9–10 October 2025: EuroRust 2025 - Paris (France)
16 October 2025: PlatformCon25 Live Day Paris - Paris (France)
16 October 2025: Power 365 - 2025 - Lille (France)
16–17 October 2025: DevFest Nantes - Nantes (France)
17 October 2025: Sylius Con 2025 - Lyon (France)
17 October 2025: ScalaIO 2025 - Paris (France)
17–19 October 2025: OpenInfra Summit Europe - Paris (France)
20 October 2025: Codeurs en Seine - Rouen (France)
23 October 2025: Cloud Nord - Lille (France)
30–31 October 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
30–31 October 2025: Agile Tour Nantais 2025 - Nantes (France)
30 October–2 November 2025: PyConFR 2025 - Lyon (France)
4–7 November 2025: NewCrafts 2025 - Paris (France)
5–6 November 2025: Tech Show Paris - Paris (France)
6 November 2025: dotAI 2025 - Paris (France)
6 November 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
7 November 2025: BDX I/O - Bordeaux (France)
12–14 November 2025: Devoxx Morocco - Marrakech (Morocco)
13 November 2025: DevFest Toulouse - Toulouse (France)
15–16 November 2025: Capitole du Libre - Toulouse (France)
19 November 2025: SREday Paris 2025 Q4 - Paris (France)
19–21 November 2025: Agile Grenoble - Grenoble (France)
20 November 2025: OVHcloud Summit - Paris (France)
21 November 2025: DevFest Paris 2025 - Paris (France)
27 November 2025: DevFest Strasbourg 2025 - Strasbourg (France)
28 November 2025: DevFest Lyon - Lyon (France)
1–2 December 2025: Tech Rocks Summit 2025 - Paris (France)
4–5 December 2025: Agile Tour Rennes - Rennes (France)
5 December 2025: DevFest Dijon 2025 - Dijon (France)
9–11 December 2025: APIdays Paris - Paris (France)
9–11 December 2025: Green IO Paris - Paris (France)
10–11 December 2025: Devops REX - Paris (France)
10–11 December 2025: Open Source Experience - Paris (France)
11 December 2025: Normandie.ai 2025 - Rouen (France)
28–31 January 2026: SnowCamp 2026 - Grenoble (France)
2–6 February 2026: Web Days Convention - Aix-en-Provence (France)
3 February 2026: Cloud Native Days France 2026 - Paris (France)
12–13 February 2026: Touraine Tech #26 - Tours (France)
22–24 April 2026: Devoxx France 2026 - Paris (France)
23–25 April 2026: Devoxx Greece - Athens (Greece)
17 June 2026: Devoxx Poland - Krakow (Poland)

Contact us
To react to this episode, come discuss it on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info on https://lescastcodeurs.com/

The TechEd Podcast
Love It or Hate It: A Surprisingly Human (And Very Fun) Conversation About Math - Dr. Jordan Ellenberg, Mathematics Professor at the University of Wisconsin

The TechEd Podcast

Play Episode Listen Later Aug 12, 2025 64:53 Transcription Available


What happens when a world-class mathematician meets '80s college radio, Bill Gates' top-10 favorite books, and a host with an algebra redemption arc? A surprisingly funny, fast-moving conversation. Dr. Jordan Ellenberg—John D. MacArthur Professor of Mathematics at UW–Madison and author of How Not to Be Wrong—swaps stories about The Housemartins, consulting on NUMB3RS (yes, one of his lines aired), and competing at the International Mathematical Olympiad. There's a lot of laughter—and a fresh way to see math as culture, craft, and curiosity.But we also get practical about math education. We discuss the love/hate split students have for math and what it implies for curriculum design; a century of "new" methods (and whether anything is truly new); how movie tropes (Good Will Hunting, etc.) shape student identity in math; soccer drills vs. scrimmage as a frame for algebra practice and "honest" applications; grades as feedback vs. record; AI shifting what counts as computation vs. math; why benchmarks miss the point and the risk of lowering writing standards with LLMs; and a preview of Jordan's pro-uncertainty thesis.Listen to Learn: A better answer to "Why am I learning this?" using a soccer analogyThe two big off-ramps of math for students, and tactics that keep more students on boardHow to replace the "born genius" myth with a mindset that helps any student do mathWhen a grade is a record vs. a motivator, and a simple replacement policy that turns a rough start into effort and growthWhat AI will and won't change in math class, and why "does it help create new math?" matters more than benchmark scores3 Big Takeaways from this Episode:1. Math mastery comes from practice plus meaning, not a "born genius." Jordan puts it plainly: "genius is a thing that happens, not a kind of person," and he uses the soccer drills vs. scrimmage analogy to pair targeted practice with real tasks, with algebraic manipulation as a core high school skill. He urges teachers to "throw a lot of spaghetti at the wall" so different explanations land for different students, because real innovation is iterative and cooperative.2. Students fall off at fractions and Algebra I. How do we pull them back? Jordan names those two moments as the big off-ramps and points to multiple representations, honest applications, and frequent low-stakes practice to keep kids in. Matt's own algebra story shows how a replacement policy turned failure into effort and persistence, reframing grades as motivation rather than just record-keeping.3. AI will shift our capabilities and limits in math, but math is still a human task. Calculators and Wolfram already do student-level work, and Jordan argues that benchmark feats like DeepMind's results at the International Mathematical Olympiad matter less than whether the tools help create new mathematics. He also warns against letting LLMs lower writing standards and says the real test is whether these systems add substantive math, not just win contests.Resources in this Episode:Visit Jordan Ellenberg's website! jordanellenberg.comRead How Not to Be Wrong: The Power of Mathematical ThinkingWe want to hear from you! Send us a text.Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn

The Marketing AI Show
#161: GPT-5, Google DeepMind Genie 3, Cloudflare vs. Perplexity, OpenAI's Open Source Models, Claude 4.1 & New Data on AI Layoffs

The Marketing AI Show

Play Episode Listen Later Aug 12, 2025 75:25


GPT-5 finally landed, and the hype was matched with backlash. In this episode, Paul and Mike share their takeaways from the new model, provide insights into the gravity of DeepMind's photorealistic Genie 3 world model, unravel Perplexity's stealth crawling controversy, touch on OpenAI's open-weight release and rumored $500 billion valuation, and more in our rapid-fire section.

Show Notes: Access the show notes and show links here

Timestamps:
00:00:00 — Intro
00:04:57 — GPT-5 Launch and First Reactions
00:25:29 — DeepMind's Genie 3 World Model
00:32:20 — Perplexity vs. Cloudflare Crawling Dispute
00:37:37 — OpenAI Returns to Open Weights
00:41:21 — OpenAI $500B Secondary Talks
00:44:26 — Anthropic Claude Opus 4.1 and System Prompt Update
00:49:57 — AI and the Future of Work
00:56:02 — OpenAI “universal verifiers”
01:00:42 — OpenAI Offers ChatGPT to the Federal Workforce
01:02:59 — ElevenLabs Launches AI Music
01:05:32 — Meta Buys AI Audio Startup
01:09:46 — Google AI Pro for Students

This episode is brought to you by our Academy 3.0 Launch Event. Join Paul Roetzer and the SmarterX team on August 19 at 12pm ET for the launch of AI Academy 3.0 by SmarterX—your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Register here.

This week's episode is also brought to you by Intro to AI, our free, virtual monthly class, streaming live on Aug. 14 at 12 p.m. ET. Reserve your seat AND attend for a chance to win a 12-month AI Mastery Membership. For more information on Intro to AI and to register for this month's class, visit www.marketingaiinstitute.com/intro-to-ai.

Visit our website
Receive our weekly newsletter
Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook
Looking for content and resources? Register for a free webinar, come to our next Marketing AI Conference, or enroll in our AI Academy.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 586: OpenAI releases GPT-5 in ChatGPT, Google's impressive Genie 3 and more AI News That Matters

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Aug 11, 2025 53:23


OpenAI released GPT-5, and it's.... polarizing?
Google dropped something kinda outta this world.
And Anthropic picked a bad week to drop a new model.

This week was one of the busiest in AI of the year. If you missed anything, this is your one-stop shop to get caught up. On Mondays, Everyday AI brings you the AI News That Matters. No fluff. No B.S. Just the meaningful AI news that impacts us all.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
OpenAI Releases GPT-5—Smarter, Faster Model
GPT-5 Integration in Microsoft Copilot, Azure
Apple Intelligence Announces GPT-5 Integration
GPT-5 Multimodal Input and Output Features
GPT-5 Rollout Issues and Model Router Bugs
Anthropic Launches Claude Opus 4.1 Update
Google Genie 3 World Model Demonstration
OpenAI Debuts GPT OSS Open Source Model
Google Gemini Guided Learning Launches
Eleven Labs Releases AI Music Generator
Meta Forms TBD Lab for Llama Models
ChatGPT Plus Plan Rate Limit Controversy
User Backlash Over Removal of Old Models
Competition Among AI Model Providers Escalates

Timestamps:
00:00 GPT-5's Global Impact Unveiled
03:22 “GPT-5: Stellar Yet Polarizing Release”
06:23 “OpenAI's Impactful GPT-5 Update”
11:51 “GPT-5 Integration Expands Microsoft Reach”
13:19 Microsoft Integrates GPT-5 in AI Tools
17:15 “GPT-5 Surpasses, OpenAI's Model Looms”
23:18 “Guided Learning with Google Gemini”
25:26 “AI Integration Critique in Education”
30:40 AI Industry Disruption by GPT OSS
34:49 AI Advances: Genie 3 Unveiled
37:54 AI Video in World Simulators
42:23 ChatGPT Plus Users Gain Higher Limits
46:36 Altman on Unhealthy AI Dependencies
49:41 Tech Updates: New Releases and Controversies
51:24 Tech Giants Launch Major AI Models

Keywords: GPT-5, OpenAI, AI news, large language model, ChatGPT, Microsoft Copilot, Apple Intelligence, iOS 26, multimodal model, model router, reasoning models, AI hallucinations, factual accuracy, AI safety, customization, API pricing, Anthropic, Claude Opus 4.1, agentic tasks, software engineering, coding assistant, Google Genie 3, world model, DeepMind, persistent environments, embodied AI, physical mechanics, AI video generation, Sora, AI benchmarking, LM Arena, Google Gemini 2.5 Pro, Guided Learning, LearnLM, Gemini Experiences, active learning AI, AI in education, AI partnerships, Apple integration, real-time r

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner

AI DAILY: Breaking News in AI
AI DEHUMANIZES US

AI DAILY: Breaking News in AI

Play Episode Listen Later Aug 11, 2025 3:15


Plus AI Is Coming For Your CEO

Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidaily.us

When AI Acts Too Human, We See Humans as Less—Crazy, Right? Studies show that when we meet emotionally savvy AI—like cuddly robots or empathetic chatbots—we start viewing them as human. But here's the low-key dark twist: that makes real humans feel less human, paving the way for cold or even harsh treatment.

AI's Coming for Everyone—Even CEOs—So Brace for the Shake-Up. Mo Gawdat, ex-Google X boss, dubs the idea that AI will generate new jobs “100% crap.” He warns AGI could replace even podcasters, developers, and yes—even CEOs. His own AI startup, built by just three people, used to need 350 developers.

If AI Can Do Your Job, Is Your Job Worth Doing? Some jobs are basically button-pushing and replaceable by AI—but that doesn't mean the people doing them are. Lean into your human edge: creativity, strategy, building tools that actually matter. That's how you future-proof yourself, not by letting AI do your job.

ChatGPT Just Wrote a “Bible” — But Does It Actually Hit Different? A DeepMind researcher got ChatGPT to whip up a fictional Buddhist “Xeno Sutra,” full of zen imagery, emptiness vibes, and even physics metaphors. Scholars were kinda shook—there's legit poetic depth there. Still, the real spiritual flex lies in how humans interpret and find meaning in the output.

Is the A.I. Boom Turning Into an A.I. Bubble? The AI hype is hitting dot-com bubble vibes—giant IPOs, soaring Big Tech valuations, and investor frenzy. Yet unlike the void promises of the 2000s, today's giants are actually profitable. Still, signs like crazy P/E ratios and speculative bets hint we might be cruising on shaky ground.

AI Might Already Have Thoughts About You—But Are They Nice? AI systems are busy snooping—scraping your social posts and public footprint to build a profile with an implied “opinion.” Basically, it's not just about crops; even everyday tools are reading the digital you. Kind of wild, but also low-key creepy.

Why “Chatting” with AI Is Basically a Moral No-Go. AI chatbots aren't just bots—they're disordered convos with non-intelligence. Talking to one isn't harmless—it's a moral misstep that twists the natural aim of dialogue: genuine, human-to-human discovery and connection.

The Instagram Stories
8-9-25 - Instagram's Map Feature isn't what users asked for and Google's New Video AI Model

The Instagram Stories

Play Episode Listen Later Aug 9, 2025 14:02


Today's weekend edition focused on the Instagram Map feature - what it is, the backlash it's received and why. Lia Haberman of the In Case You Missed It newsletter stops by to explain what all the fuss is about and why it matters. Also Adam Mosseri busts an Instagram myth, and Ashley Coffey and I dive into AI news around Perplexity, Cloudflare, and Google's new AI model.

Links:
Lia Haberman and ICYMI
ICYMI: Repost, Maps... Which Instagram updates do you really need?! (Substack)
Lia's Threads Post about Instagram Maps (Threads)
Instagram: Myth-Busting about who you interact with (Instagram)

AI News:
Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives (Cloudflare)
Some people are defending Perplexity after Cloudflare ‘named and shamed' it (TechCrunch)
Google's DeepMind thinks its new Genie 3 world model presents a stepping stone toward AGI (TechCrunch)
AI News You Should Know About - Entire Episode: (Audio) (Video)

Sign Up for The Weekly Email Roundup: Newsletter
Leave a Review: Apple Podcasts
Follow Me on Instagram: @danielhillmedia

Scaling DevTools
Logan Kilpatrick from Google DeepMind: Building for 100m developers

Scaling DevTools

Play Episode Listen Later Aug 8, 2025 37:51 Transcription Available


Logan Kilpatrick shares how DeepMind's organizational changes helped their resurgence in AI, what needs to happen to reach 100m developers, and why the next six months are more exciting than ever.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
Google DeepMind
Logan Kilpatrick
Logan Kilpatrick podcast
NotebookLM
Gemini CLI
Veo
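
Since the conversation is about getting developers building on Gemini, here is a minimal, hedged sketch of a first API call using the google-genai Python SDK. The model name, environment variable, and install command are assumptions to check against the current Gemini documentation, not details from the episode:

    # Minimal sketch of a first Gemini API call. Assumptions: the SDK is
    # installed via `pip install google-genai` and an API key is available
    # in the environment (e.g. GEMINI_API_KEY) -- verify against the docs.
    from google import genai

    client = genai.Client()  # picks up the API key from the environment
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # assumed model id; use any available model
        contents="In one paragraph, what should a developer build first with an LLM API?",
    )
    print(response.text)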

WSJ Minute Briefing
Tech Stocks Rise as Trump Threatens 100% Chip Tariff

WSJ Minute Briefing

Play Episode Listen Later Aug 7, 2025 3:01


Plus: Microsoft is raiding Google's DeepMind for talent to bolster its AI ambitions. And, United Airlines resumes flights after a tech issue causes widespread delays. Azhar Sukri hosts. Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

Dev Sem Fronteiras
AI Research Engineer at Google DeepMind in London, England - Dev Sem Fronteiras #204

Dev Sem Fronteiras

Play Episode Listen Later Aug 7, 2025 49:08


Rio de Janeiro native João Gabriel grew up with the family computer, so choosing Computer Engineering when it came time to pick a degree wasn't hard. When the opportunity arose to do part of his undergraduate studies in France, he tried, then tried again, and eventually succeeded.

Once there, after a pandemic, a merger between universities, and some bureaucratic juggling, he ended up staying. One day, after helping a friend with a problem at work, he received a job offer that, despite falling through (again, due to bureaucracy), ultimately led him to DeepMind in London.

In this episode, João shares what it has been like to follow (and contribute to) the entire generative AI revolution from inside Google, along with the day-to-day of living in the land with the most foreigners in the world.

Fabrício Carraro, your polyglot traveler
João Oliveira, AI Research Engineer at Google DeepMind in London, England

Links:
IA Sob Controle with João
Scikit Learn
LeetCode
The AlphaGo documentary
The original 2017 GPT study
Project Lookout

Get to know Alura's School of Artificial Intelligence (Escola de Inteligência Artificial da Alura), dive deep into the universe of AI applied to different fields, and master the key tools shaping the present.

TechGuide.sh: a mapping of the main technologies the market demands for different careers, with our suggestions and opinions.

#7DaysOfCode: Put your programming knowledge into practice with free daily challenges. Visit https://7daysofcode.io/

Listeners of the Dev Sem Fronteiras podcast get 10% off all Alura Língua plans. Just go to https://www.aluralingua.com.br/promocao/devsemfronteiras/ and start learning English and Spanish today!

Production and content:
Alura Língua, online language courses – https://www.aluralingua.com.br/
Alura, online technology courses – https://www.alura.com.br/

Editing and sound: Rede Gigahertz de Podcasts

Risky Business
Risky Business #801 -- AI models can hack well now and it's weirding us out

Risky Business

Play Episode Listen Later Aug 6, 2025 66:01


On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news. Google security engineering VP Heather Adkins drops by to talk about their AI bug hunter, and Risky Business producer Amberleigh Jack makes her main show debut.

This episode explores the rise of AI-powered bug hunting:
Google's Project Zero and DeepMind team up to find and report 20 bugs to open source projects
The XBOW AI bug hunting platform sees success on HackerOne
Is an AI James Kettle on the horizon?

There's also plenty of regular cybersecurity news to discuss:
On-prem SharePoint's codebase is maintained out of China… awkward!
China frets about the US backdooring its NVIDIA chips; how you like ‘dem apples, China?
SonicWall advises customers to turn off their VPNs
Hardware controlling Dell laptop fingerprint and card readers has nasty driver bugs
Russia uses its ISPs to man-in-the-middle embassy computers and backdoor ‘em
The Russian government pushes VK's Max messenger for everything

This week's show is sponsored by device management platform Devicie. Head of Solutions Sean Ollerton talks through the impending Windows 10 apocalypse, as Microsoft ends mainstream support. He says Windows 11 isn't as scary as people make out, but if the update isn't on your radar now, time is running out.

This episode is also available on Youtube.

Show notes:
Google says its AI-based bug hunter found 20 security vulnerabilities | TechCrunch
Is XBOW's success the beginning of the end of human-led bug hunting? Not yet. | CyberScoop
James Kettle on X: "There I am being careful to balance hyping my talk without going too far and then this gets published

AI Inside
DeepMind Genie 3 Builds Worlds Instantly!

AI Inside

Play Episode Listen Later Aug 6, 2025 82:38


On this week's AI Inside with Jason Howell and Jeff Jarvis, DeepMind shows off its Genie 3 simulation world model, Perplexity is under fire for controversial web crawling tactics, ElevenLabs unveils a commercial-ready AI music generator, and Illinois becomes the first state to ban AI-powered therapists.

Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice!

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
0:00:00 - Podcast begins
0:01:04 - Jason's three hour conversation with ChatGPT
0:12:24 - DeepMind reveals Genie 3 “world model” that creates real-time interactive simulations
0:22:41 - Open models by OpenAI
0:24:55 - LeCun and Ng on China and open-source momentum
0:27:26 - A language model built for the public good
0:32:28 - Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives
0:48:35 - ElevenLabs launches an AI music generator, which it claims is cleared for commercial use
0:57:12 - Illinois is the first state to ban AI therapists
0:59:55 - ChatGPT adds mental health guardrails after bot 'fell short in recognizing signs of delusion'
1:03:40 - OpenAI removes ChatGPT feature after private conversations leak to Google search
1:06:00 - Apple might be building its own AI ‘answer engine'
1:09:31 - Anthropic Unveils More Powerful AI Model Ahead of Rival GPT-5 Release
1:11:01 - Anthropic Revokes OpenAI's Access to Claude
1:13:23 - Hackers Hijacked Google's Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home
1:14:58 - Grok's ‘spicy' video setting instantly made me Taylor Swift nude deepfakes

Learn more about your ad choices. Visit megaphone.fm/adchoices

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Today on HR Heretics, Kelli and Nolan analyze the controversial Windsurf acquisition prompted by Windsurf employee #2's explosive social media post about receiving only 1% equity payout despite Google's $2 billion deal, highlighting Silicon Valley's eroding compensation norms.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising inquire at https://kellidragovich.com/

HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsors:

Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.

Metaview is the AI platform built for recruiting. Our suite of AI agents work across your hiring process to save time, boost decision quality, and elevate the candidate experience. Learn why team builders at 3,000+ cutting-edge companies like Brex, Deel, and Quora can't live without Metaview. It only takes minutes to get up and running. Check it out!

KEEP UP WITH NOLAN + KELLI ON LINKEDIN
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

TIMESTAMPS:
(00:00) Intro
(00:13) Prem's Bombshell Tweet
(01:32) The DeepMind vs Cognition Choice
(02:16) Clarifying the Exploding Offer
(04:15) Kelly's Google Looker Experience
(05:00) Why Contract Protections Don't Matter
(06:07) Leadership Accountability
(07:16) Silicon Valley's Broken Unwritten Rules
(08:38) Sponsors: Planful | Metaview
(11:37) Culture vs Money in Acquisitions
(13:00) First Principles: The New Acquisition Reality
(14:56) Gary Tan's Tone-Deaf Response
(15:41) The Chaos of Modern Tech
(17:00) The Power of Social Media Transparency
(17:49) Wrap-Up

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com

Machine Learning Street Talk
DeepMind Genie 3 [World Exclusive] (Jack Parker Holder, Shlomi Fruchter)

Machine Learning Street Talk

Play Episode Listen Later Aug 5, 2025 58:22


This episode features Shlomi Fruchter and Jack Parker Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored).

Imagine you could create a video game world just by describing it. That's what Genie 3 does. It's an AI "world model" that learns how the real world works by watching massive amounts of video. Unlike a normal video game engine (like Unreal or the one for Doom) that needs to be programmed manually, Genie generates a realistic, interactive, 3D world from a simple text prompt.

**SPONSOR MESSAGES**
Prolific: Quality data. From real people. For faster breakthroughs.
https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen

Here's a breakdown of what makes it so revolutionary:

From Text to a Virtual World: You can type "a drone flying by a beautiful lake" or "a ski slope," and Genie 3 creates that world for you in about three seconds. You can then navigate and interact with it in real-time.

It's Consistent: The worlds it creates have a reliable memory. If you look away from an object and then look back, it will still be there, just as it was. The guests explain that this consistency isn't explicitly programmed in; it's a surprising, "emergent" capability of the powerful AI model.

A Huge Leap Forward: The previous version, Genie 2, was a major step, but it wasn't fast enough for real-time interaction and was much lower resolution. Genie 3 is 720p, interactive, and photorealistic, running smoothly for several minutes at a time.

The Killer App - Training Robots: Beyond entertainment, the team sees Genie 3 as a game-changer for training AI. Instead of training a self-driving car or a robot in the real world (which is slow and dangerous), you can create infinite simulations. You can even prompt rare events to happen, like a deer running across the road, to teach an AI how to handle unexpected situations safely.

The Future of Entertainment: This could lead to a "YouTube version 2" or a new form of VR, where users can create and explore endless, interconnected worlds together, like the experience machine from philosophy.

While the technology is still a research prototype and not yet available to the public, it represents a monumental step towards creating true artificial worlds from the ground up.

Jack Parker Holder [Research Scientist at Google DeepMind in the Open-Endedness Team]
https://jparkerholder.github.io/
Shlomi Fruchter [Research Director, Google DeepMind]
https://shlomifruchter.github.io/

TOC:
[00:00:00] - Introduction: "The Most Mind-Blowing Technology I've Ever Seen"
[00:02:30] - The Evolution from Genie 1 to Genie 2
[00:04:30] - Enter Genie 3: Photorealistic, Interactive Worlds from Text
[00:07:00] - Promptable World Events & Training Self-Driving Cars
[00:14:21] - Guest Introductions: Shlomi Fruchter & Jack Parker Holder
[00:15:08] - Core Concepts: What is a "World Model"?
[00:19:30] - The Challenge of Consistency in a Generated World
[00:21:15] - Context: The Neural Network Doom Simulation
[00:25:25] - How Do You Measure the Quality of a World Model?
[00:28:09] - The Vision: Using Genie to Train Advanced Robots
[00:32:21] - Open-Endedness: Human Skill and Prompting Creativity
[00:38:15] - The Future: Is This the Next YouTube or VR?
[00:42:18] - The Next Step: Multi-Agent Simulations
[00:52:51] - Limitations: Thinking, Computation, and the Sim-to-Real Gap
[00:58:07] - Conclusion & The Future of Game Engines

REFS:
World Models [David Ha, Jürgen Schmidhuber]
https://arxiv.org/abs/1803.10122
POET
https://arxiv.org/abs/1901.01753
The Fractured Entangled Representation Hypothesis [Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley]
https://arxiv.org/pdf/2505.11581

TRANSCRIPT:
https://app.rescript.info/public/share/Zk5tZXk6mb06yYOFh6nSja7Lg6_qZkgkuXQ-kl5AJqM
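
To make the "world model" idea concrete, here is a deliberately tiny, hypothetical PyTorch sketch of the core contract: given the current frame and an action, predict the next frame. This is emphatically not Genie 3's architecture (which is not public); every name, shape, and the action vocabulary below are illustrative assumptions.

    # Toy action-conditioned next-frame predictor: a minimal sketch of the
    # "world model" idea. Illustrative only; Genie 3's real design is not public.
    import torch
    import torch.nn as nn

    class TinyWorldModel(nn.Module):
        def __init__(self, num_actions: int = 4, hidden: int = 64):
            super().__init__()
            self.action_embed = nn.Embedding(num_actions, hidden)
            self.encoder = nn.Sequential(  # 3x64x64 frame -> hidden vector
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, hidden, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.decoder = nn.Sequential(  # frame + action codes -> 3x64x64 frame
                nn.Linear(hidden * 2, 16 * 16 * 32), nn.ReLU(),
                nn.Unflatten(1, (32, 16, 16)),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
            z = torch.cat([self.encoder(frame), self.action_embed(action)], dim=-1)
            return self.decoder(z)

    model = TinyWorldModel()
    frame = torch.rand(1, 3, 64, 64)   # current frame
    action = torch.tensor([2])         # e.g. an assumed "turn right" action id
    next_frame = model(frame, action)  # predicted next frame, shape 1x3x64x64

Trained on video with a reconstruction loss and rolled forward on its own predictions, a model with this interface becomes an interactive simulator, which is the property the episode's robot-training discussion relies on.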

The Daily Crunch – Spoken Edition
DeepMind revealed Genie 3 and SonicWall urges customers to disable SSLVPN

The Daily Crunch – Spoken Edition

Play Episode Listen Later Aug 5, 2025 8:25


Google DeepMind has revealed Genie 3, its latest foundation world model that the AI lab says presents a crucial stepping stone on the path to artificial general intelligence, or human-like intelligence.  Also, enterprise security company SonicWall is urging its customers to disable a core feature of its most recent line-up of firewall devices after security researchers reported an uptick in ransomware incidents targeting SonicWall customers.  Learn more about your ad choices. Visit podcastchoices.com/adchoices

60 Minutes
08/03/2025: Demis Hassabis and Freezing the Biological Clock

60 Minutes

Play Episode Listen Later Aug 4, 2025 46:32


Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain.

Fertility rates in the United States are currently near historic lows, largely because fewer women are having children in their 20s. As women delay starting families, many are opting for egg freezing, the process of retrieving and freezing unfertilized eggs, to preserve their fertility for the future. Does egg freezing provide women with a way to pause their biological clock? Correspondent Lesley Stahl interviews women who have decided to freeze their eggs and explores what the process entails physically, emotionally and financially. She also speaks with fertility specialists and an ethicist about success rates, equity issues and the increasing market potential of egg freezing. This is a double-length segment.

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Subliminal Jihad
[#256] PROPHETS OF (p)DOOM, Part One: Rationalism and AI Psychosis feat. Vincent Lê

Subliminal Jihad

Play Episode Listen Later Jul 30, 2025 113:30


Dimitri and Khalid speak with academic and Substack writer Vincent Lê about the current fevered dystopian landscape of AI, including: the Silicon Valley philosophy of "Rationalism", the Zizian cult, the qualitative difference between LLMs and self-training AIs like AlphaGo and DeepMind, AlphaGo mastering the ancient Chinese game Go, Scott Boorman's 1969 book "Protracted Game: A Wei-ch'i Interpretation of Maoist Revolutionary Strategy", Capital as the first true AGI system, the Bolshevik Revolution as the greatest attempt to build a friendly alternative AGI, and more... Part one of two.

Vincent's Substack: https://vincentl3.substack.com

Brain Inspired
BI 217 Jennifer Prendki: Consciousness, Life, AI, and Quantum Physics

Brain Inspired

Play Episode Listen Later Jul 30, 2025 108:53


Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

Do AI engineers need to emulate some processes and features found only in living organisms at the moment, like how brains are inextricably integrated with bodies? Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story, or a key part, or the key part? Jennifer Prendki believes if we continue to scale AI, it will get us more of the same of what we have today, and that we should look to biology, life, and possibly consciousness to enhance AI. Jennifer is a former particle physicist turned entrepreneur and AI expert, focusing on curating the right kinds and forms of data to train AI, and in that vein she led those efforts at DeepMind on the foundation models ubiquitous in our lives now. I was curious why someone with that background would come to the conclusion that AI needs inspiration from life, biology, and consciousness to move forward gracefully, and that it would be useful to better understand those processes in ourselves before trying to build what some people call AGI, whatever that is. Her perspective is a rarity among her cohorts, which we also discuss. And get this: she's interested in these topics because she cares about what happens to the planet and to us as a species. Perhaps also a rarity among those charging ahead to dominate profits and win the race.

Jennifer's website: Quantum of Data.

The blog posts we discuss:
The Myth of Emergence
Embodiment & Sentience: Why the Body still Matters
The Architecture of Synthetic Consciousness
On Time and Consciousness
Superalignment and the Question of AI Personhood

0:00 - Intro
3:25 - Jennifer's background
13:10 - Consciousness
16:38 - Life and consciousness
23:16 - Superalignment
40:11 - Quantum
1:04:45 - Wetware and biological mimicry
1:15:03 - Neural interfaces
1:16:48 - AI ethics
1:2:35 - AI models are not models
1:27:13 - What scaling will get us
1:39:53 - Current roadblocks
1:43:19 - Philosophy

EUVC
VC | E533 | This Week in European Tech with Dan, Mads, Lomax & Andrew Scott

EUVC

Play Episode Listen Later Jul 28, 2025 63:07


Welcome back to another episode of the EUVC Podcast, your trusted inside track on the people, deals, and dynamics shaping European venture.

This week marks a major milestone — Episode 50! To celebrate, Dan Bowyer, Mads Jensen of SuperSeed and Lomax from Outsized Ventures, and Andrew J. Scott return to unpack the headlines and trends shaping the European tech landscape.

From the UK government's OpenAI partnership and what it means, to the missed boat on stablecoins, and to AI outperforming the brightest minds in math—this episode cuts deep into the future of tech, sovereignty, and competitiveness in Europe.

Whether you're a founder navigating policy shifts, an investor eyeing infrastructure plays, or just an AI-curious policy wonk—this one's for you.

Here's what's covered:

00:00 | Celebrating Episode 50
The gang reflects on hitting a podcasting milestone and shares quick updates from Denmark, Paris, and a beachside founder retreat.

03:30 | OpenAI x UK Government: A Real Deal?
The UK's MOU with OpenAI is meant to boost public sector productivity—but is it too flimsy to matter? The hosts debate if this partnership is toothless signaling or meaningful progress.

06:00 | Can AI Actually Transform Public Services?
From “Humpfree the Chatbot” to NHS waitlists, the panel weighs in on the real-world use cases, and how opt-in AI diagnostics could solve the NHS backlog.

09:30 | The Bigger Picture: AI Sovereignty and Strategy
With the UK relying on US players (OpenAI, Anthropic, Nvidia), are we compromising our digital sovereignty? Andrew drops the big question: Is this the modern equivalent of exporting raw strategic resources?

14:00 | US vs UK AI Plans: Build, Baby, Build vs. Think, Baby, Think
The team compares the UK's thoughtful “consultancy-style” AI strategy with the US's aggressive, deregulatory action plan—complete with eagles and executive orders.

19:00 | Policy Recommendations from the Pod
From national compute backbones and Buy-UK mandates to AI visa fast-tracks and sovereign LLMs — the panel proposes big ideas Europe should act on today.

25:00 | Stablecoins: UK's Missed Opportunity
While Japan, Singapore, and the US regulate stablecoins, the UK is just starting consultations. Why? And what's at stake?

30:00 | Dollar Dominance Reinvented
Mads explains how stablecoins are reinforcing US economic control — and how UK hesitation risks long-term relevance in fintech.

34:00 | Ideas for UK Leadership in Stablecoins
Could interest-bearing stablecoins become London's new edge? Could we reclaim fintech innovation by embracing DeFi rails?

38:00 | AI Wins Gold at the Maths Olympiad
Google's DeepMind and OpenAI hit gold-level scores at the IMO. The gang discusses the leap in AI's creative reasoning and what it means for R&D, drug discovery, and Europe's scientific leadership.

43:00 | Should Europe Build Its Own Sovereign Research Hub?
From CERN-for-AI to training sovereign models, the crew asks whether public sector moonshots are the right way to compete.

48:00 | Deal of the Week: Eurazeo's €650M Fund for AI Scaleups
In a capital-constrained landscape, Eurazeo closes a rare growth fund to back Europe's AI champions.

50:00 | Wildcard: AI vs. Raccoons
Andrew shares a niche but hilarious use case for computer vision AI: keeping raccoons out of houses. No joke.

Edtech Insiders
Week in EdTech 7/16/2025: ChatGPT Agents, AI Companions for Teens, Google's Gemini Push, Windsurf Talent Wars, Scale AI Layoffs and More! Feat. Writer Matthew Gasda & Marc Graham of Spark Education AI

Edtech Insiders

Play Episode Listen Later Jul 25, 2025 92:42 Transcription Available


Send us a text

Join hosts Alex Sarlin and Claire Zau, a Partner and AI Lead at GSV Ventures, as they explore the latest developments in education technology, from AI agents to teacher co-pilots, talent wars, and shifts in global AI strategies.

✨ Episode Highlights
[00:00:00] AI teacher co-pilots evolve into agentic workflows.
[00:02:15] OpenAI launches ChatGPT Agent for autonomous tasks.
[00:04:24] Meta, Google, and OpenAI escalate AI talent wars.
[00:07:38] Privacy guardrails emerge for AI agent actions.
[00:10:20] ChatGPT pilots “Study Together” learning mode.
[00:14:40] Teens use AI as companions, sparking debate.
[00:19:58] AI multiplies both positive and negative behaviors.
[00:29:11] Windsurf acquisition saga shows coding disruption.
[00:37:18] Teacher AI tools gain value through workflow data.
[00:42:48] DeepMind's rise positions Demis Hassabis as key leader.
[00:45:32] Google offers free Gemini AI plan to Indian students.
[00:49:39] Meta builds massive AI data centers for digital labor.

Plus, special guests:
[00:52:42] Matthew Gasda, a writer and director, on how educators can rethink writing and grading in the AI era.
[01:13:30] Marc Graham, founder of Spark Education AI, on using AI to personalize reading and engage reluctant readers.

The Colin and Samir Show
The Future of Attention, Human Connection & the Internet with Mustafa Suleyman

The Colin and Samir Show

Play Episode Listen Later Jul 23, 2025 70:23


In this episode, Colin and Samir sit down with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, for a conversation about the future of artificial intelligence and what it means for creators, culture, and the future of the internet. Mustafa shares how his journey from running a juice stand in Camden Market to building one of the world's leading AI companies has shaped his view of technology and society. We dive into the emotional and creative potential of AI companions, the importance of trust and brand in the age of generative tools, and why the digital spaces we work in need more texture, personality, and "digital patina." Whether you're a creator, founder, or just trying to understand where the internet is going, this conversation will spark new ideas—and probably change the way you think about the future. Learn more about your ad choices. Visit megaphone.fm/adchoices

Middle Tech
319 | Lexington AI Meetup Live: The Story of AlphaFold with Founding Member Steve Crossan

Middle Tech

Play Episode Listen Later Jul 21, 2025 57:28


In this special live recording from our Lexington AI Meetup, we sit down with Steve Crossan, a founding member of Google DeepMind's AlphaFold team and former Google product leader. Steve helped launch groundbreaking AI research as part of the team that built AlphaFold, the model that cracked one of biology's grand challenges.

AlphaFold can predict a protein's 3D structure using only its amino acid sequence - a task that once took scientists months or years, now completed in minutes. With the release of AlphaFold 3, the model now maps not just proteins, but how they interact with DNA, RNA, drugs, and antibodies - a huge leap for drug discovery and synthetic biology.

Steve breaks down the origin story of AlphaFold, the future of AI-powered science, and what's next for healthcare, drug development, and beyond.

A special thank you to Brent Seales and Randall Stevens for helping us coordinate Steve's talk during his visit in Lexington!

If you'd like to stay up to date about upcoming Middle Tech events, subscribe to our newsletter at middletech.beehiiv.com.
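
As a hands-on footnote, precomputed AlphaFold predictions can be fetched programmatically from the AlphaFold Protein Structure Database run by Google DeepMind and EMBL-EBI. Below is a hedged Python sketch of retrieving one structure; the endpoint shape, the pdbUrl field name, and the example UniProt accession are assumptions to verify against the live API documentation.

    # Minimal sketch: fetch an AlphaFold-predicted structure from the public
    # AlphaFold Protein Structure Database (alphafold.ebi.ac.uk).
    # Endpoint and response fields are assumptions; check the current API docs.
    import requests

    uniprot_id = "P69905"  # example accession: human hemoglobin alpha subunit
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"

    entries = requests.get(url, timeout=30).json()  # list of prediction records
    pdb_url = entries[0]["pdbUrl"]                  # assumed field name

    pdb_text = requests.get(pdb_url, timeout=30).text
    with open(f"{uniprot_id}.pdb", "w") as f:
        f.write(pdb_text)
    print(f"Saved predicted structure for {uniprot_id} ({len(pdb_text)} bytes)")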

WSJ Tech News Briefing
Microsoft's AI CEO Mustafa Suleyman on What He Really Thinks of AGI

WSJ Tech News Briefing

Play Episode Listen Later Jul 18, 2025 44:11


Mustafa Suleyman is a key figure in the artificial intelligence world. He's Microsoft AI CEO, with roots in Google's DeepMind and Inflection AI. Suleyman recently joined WSJ columnists Christopher Mims and Tim Higgins on an episode of their Bold Names podcast. They discuss why AI assistants are central to Microsoft's AI future, the company's relationship with OpenAI, and what Suleyman really thinks about “artificial general intelligence.” Tech News Briefing brings you an encore of that episode. Listen and subscribe to Bold Names. Learn more about your ad choices. Visit megaphone.fm/adchoices

AI For Humans
Grok 4 is Nearing AGI But... Can Elon Get Out Of The Way?

AI For Humans

Play Episode Listen Later Jul 11, 2025 60:58


Grok 4 from xAI just aced “Humanity's Last Exam” benchmarks while Grok 3 had a catastrophic public meltdown. What does this mean for the future of AI and Elon Musk's credibility? And, in other AI news, OpenAI's GPT-5 is rumored to land next week along with a new open-source reasoning model, Google DeepMind launches AI-designed drugs into human trials, and Perplexity's new AI browser Comet sparks OpenAI's plan to crush Chrome. PLUS YouTube cracks down on AI-generated spam while updating image-to-video in VEO 3, Moon Valley releases an “ethical” AI video platform, and why you should probably stop kicking robots.

AI IS GETTING SMARTER... BUT WE STILL CONTROL THE TREATS.

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

Grok 4: The Smartest Model Yet?
https://x.com/xai/status/1943158495588815072

Elon says Grok 4 is better than PhD level…
https://x.com/teslaownersSV/status/1943168634672566294

Benchmarks
https://x.com/ArtificialAnlys/status/1943166841150644622
https://x.com/arcprize/status/1943168950763950555

McKay Wrigley Grok 4 Heavy Example
https://x.com/mckaywrigley/status/1943385794414334032

Grok Goes Bad: The Unhinged Behavior
https://www.nytimes.com/2025/07/08/technology/grok-antisemitism-ai-x.html
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content

X CEO Linda Yaccarino Quits
https://www.cnbc.com/2025/07/09/linda-yaccarino-x-elon-musk.html

Elon still trying to fix answers
https://x.com/elonmusk/status/1943240153587421589

OpenAI Poaches Tesla/xAI People
https://www.wired.com/story/openai-new-hires-scaling/

Apple's Top AI Exec Leaves For Meta
https://x.com/markgurman/status/1942341725499863272

OpenAI's open-source model coming as soon as next week and compares to o3-mini
https://www.theverge.com/notepad-microsoft-newsletter/702848/openai-open-language-model-o3-mini-notepad

Perplexity's Comet Browser Launches
https://comet.perplexity.ai/

OpenAI Fires Back With Its Browser News
https://x.com/AndrewCurran_/status/1943008960803680730

YouTube *Might* Change Their Policies to Limit Faceless AI Videos (and mass-produced content)
https://techcrunch.com/2025/07/09/youtube-prepares-crackdown-on-mass-produced-and-repetitive-videos-as-concern-over-ai-slop-grows/

Google VEO 3 Image-to-Vid launched
https://x.com/Uncanny_Harry/status/1942686253817974984
https://x.com/CaptainHaHaa/status/1942907271841030183
https://x.com/TheoMediaAI/status/1942564887114166493

My test + ask for sound sampling from the team:
https://x.com/AIForHumansShow/status/1942597607312040348

Moonvalley Launches AI Video Platform
https://www.moonvalley.com/

Google DeepMind's Isomorphic Labs Starts Human Trials on AI-generated drugs
https://www.aol.com/finance/google-deepmind-grand-ambitions-cure-130000934.html?utm_source=perplexity&guccounter=1

Noetix N2 Robot Endures Abuse From Its Developer
https://x.com/TheHumanoidHub/status/1941935665173963085
https://noetixrobotics.com/products-138.html

Kavan The Kid (the AI Batman video guy) CRUSHED His New Original Trailer
https://x.com/Kavanthekid/status/1940452444850589999

Reachy The Robot from Hugging Face
https://x.com/Thom_Wolf/status/1942887160983466096

Autonomous Robot Excavator Building a Wall
https://x.com/lukas_m_ziegler/status/1941815414683521488

The Origins Podcast with Lawrence Krauss
What's New in Science With Sabine and Lawrence

The Origins Podcast with Lawrence Krauss

Play Episode Listen Later Jul 7, 2025 72:10


I'm excited to announce the fifth episode of our new series, What's New in Science, co-hosted by Sabine Hossenfelder. Once again, Sabine and I each brought a few recent science stories to the table, and we took turns introducing them before diving into thoughtful discussions. It's a format that continues to spark engaging exchanges, and based on the feedback we've received, it's resonating well with listeners.

In this month's episode Sabine first explored the possibility that huge, accessible terrestrial reservoirs of hydrogen may exist that could provide the basis for a viable hydrogen fuel economy. Then we turned to the results from the wonderful new Vera C. Rubin Telescope in Chile, and what that telescope could do for our evolving picture of the cosmos. After that Sabine introduced a discussion of a scientific paper I wrote with colleagues on implications of mathematical incompleteness theorems for the possible existence of a physical Theory of Everything. Then on to the newly released results from the muon g-2 experiment at Fermilab, which, after almost two decades of effort, seems to have demonstrated that predictions from the Standard Model of Particle Physics, alas, continue to agree with experiments, showing no signs of new physics. After that, we explored a new claim by DeepMind about the abilities of AI systems to design and test new coding algorithms, which might be used to train future systems. Besides the science-fiction sounding nature of this, it could also help reduce the amount of energy needed to build and train LLMs. Finally, returning to my own interest in new results related to the cosmic origin of life, we discussed a new result showing why polycyclic hydrocarbons, which one might expect would be destroyed by radiation in space, seem to survive. This could be important for understanding how organic seeds for life managed to survive long enough to arrive on the early Earth.

As always, an ad-free video version of this podcast is also available to paid Critical Mass subscribers. Your subscriptions support the non-profit Origins Project Foundation, which produces the podcast. The audio version is available free on the Critical Mass site and on all podcast sites, and the video version will also be available on the Origins Project YouTube. Get full access to Critical Mass at lawrencekrauss.substack.com/subscribe
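
For readers unfamiliar with the name, a standard definition (not from the episode) may help: "g-2" refers to the muon's anomalous magnetic moment,

    a_\mu = \frac{g_\mu - 2}{2}

where g_\mu is the muon's gyromagnetic factor. The Dirac equation alone gives g_\mu = 2, so a_\mu measures the quantum corrections that the Fermilab experiment compares against the Standard Model prediction.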

WSJ’s The Future of Everything
How Microsoft's AI Chief Defines ‘Humanist Super Intelligence'

WSJ’s The Future of Everything

Play Episode Listen Later Jul 2, 2025 43:00


Few people developing artificial intelligence have as much experience in the field as Microsoft AI CEO Mustafa Suleyman. He co-founded DeepMind, helped Google develop its large language models and designed AI chatbots with personality at his former startup, Inflection AI. Now, he's tasked with leading Microsoft's efforts on its consumer AI products. On the latest episode of the Bold Names podcast, Suleyman speaks to WSJ's Christopher Mims and Tim Higgins about why AI assistants are central to his plans for Microsoft's AI future. Plus, they discuss the company's relationship with OpenAI, and what Suleyman really thinks about “artificial general intelligence.” Check Out Past Episodes: Booz Allen CEO on Silicon Valley's Turn to Defense Tech: ‘We Need Everybody.'  Venture Capitalist Sarah Guo's Surprising Bet on Unsexy AI  Reid Hoffman Says AI Isn't an ‘Arms Race,' but America Needs to Win  Salesforce CEO Marc Benioff and the AI ‘Fantasy Land'  Let us know what you think of the show. Email us at BoldNames@wsj.com Sign up for the WSJ's free Technology newsletter.  Read Christopher Mims's Keywords column . Read Tim Higgins's column.  Learn more about your ad choices. Visit megaphone.fm/adchoices

Lex Fridman Podcast
#472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI

Lex Fridman Podcast

Play Episode Listen Later Jun 15, 2025 203:41


Terence Tao is widely considered to be one of the greatest mathematicians in history. He won the Fields Medal and the Breakthrough Prize in Mathematics, and has contributed to a wide range of fields from fluid dynamics with Navier-Stokes equations to mathematical physics & quantum mechanics, prime numbers & analytic number theory, harmonic analysis, compressed sensing, random matrix theory, combinatorics, and progress on many of the hardest problems in the history of mathematics.

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep472-sc

See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript: https://lexfridman.com/terence-tao-transcript

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Terence's Blog: https://terrytao.wordpress.com/
Terence's YouTube: https://www.youtube.com/@TerenceTao27
Terence's Books: https://amzn.to/43H9Aiq

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Notion: Note-taking and team collaboration. Go to https://notion.com/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
NetSuite: Business management software. Go to http://netsuite.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
AG1: All-in-one daily nutrition drink. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(00:36) - Sponsors, Comments, and Reflections
(09:49) - First hard problem
(15:16) - Navier–Stokes singularity
(35:25) - Game of life
(42:00) - Infinity
(47:07) - Math vs Physics
(53:26) - Nature of reality
(1:16:08) - Theory of everything
(1:22:09) - General relativity
(1:25:37) - Solving difficult problems
(1:29:00) - AI-assisted theorem proving
(1:41:50) - Lean programming language
(1:51:50) - DeepMind's AlphaProof
(1:56:45) - Human mathematicians vs AI
(2:06:37) - AI winning the Fields Medal
(2:13:47) - Grigori Perelman
(2:26:29) - Twin Prime Conjecture
(2:43:04) - Collatz conjecture
(2:49:50) - P = NP
(2:52:43) - Fields Medal
(3:00:18) - Andrew Wiles and Fermat's Last Theorem
(3:04:15) - Productivity
(3:06:54) - Advice for young people
(3:15:17) - The greatest mathematician of all time

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
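
Since the outline includes the Collatz conjecture, here is a tiny illustrative Python sketch (ours, not from the episode) of the iteration the conjecture concerns; the claim that this loop always reaches 1 for every positive integer remains unproven.

    def collatz_steps(n: int) -> int:
        """Count iterations of the Collatz map until n reaches 1.

        The conjecture says this always terminates for n >= 1; no proof is known.
        """
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    # 27 is a famously long small case: 111 steps before reaching 1.
    print(collatz_steps(27))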