Podcasts about Google DeepMind

  • 516 podcasts
  • 979 episodes
  • 40m average duration
  • 1 new episode daily
  • Latest episode: Oct 30, 2025
[Popularity chart: "google deepmind", 2017–2024]


Best podcasts about Google DeepMind

Latest podcast episodes about Google DeepMind

Coffee Break: Señal y Ruido
Ep530_B: Stalagmites; DeepMind; Entanglement and Gravity; Gravitons; Halloween

Coffee Break: Señal y Ruido

Oct 30, 2025 · 127:32


The weekly round-table where we review the latest science news. In today's episode (Side B): the shape of stalagmites (continued) (00:00); multispectral learning from Google DeepMind (09:00); quantum entanglement in gravity vs. quantum gravity (39:00); absorption of gravitons by photons at LIGO (1:11:00); Halloween at the planetarium (1:17:00); listener messages (1:34:00). This episode is a continuation of Side A. Panelists: Cecilia Garraffo, Juan Carlos Gil, Borja Tosar. Cover image created with Seedream 4 4k. All opinions expressed during the discussion represent solely the views of the person who makes them... and sometimes not even that.

a16z
Google DeepMind Developers: How Nano Banana Was Made

a16z

Oct 28, 2025 · 54:19


Google DeepMind's new image model Nano Banana took the internet by storm. In this episode, we sit down with Principal Scientist Oliver Wang and Group Product Manager Nicole Brichtova to discuss how Nano Banana was created, why it's so viral, and the future of image and video editing.

Resources:
Follow Oliver on X: https://x.com/oliver_wang2
Follow Nicole on X: https://x.com/nbrichtova
Follow Guido on X: https://x.com/appenz
Follow Yoko on X: https://x.com/stuffyokodraws

Stay updated:
Follow a16z on X: https://x.com/a16z
Subscribe to a16z on Substack: https://a16z.substack.com/
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The John Oakley Show
Storms in the Market & in the Caribbean

The John Oakley Show

Oct 28, 2025 · 21:52


David Wilkes, President & CEO of the Building Industry and Land Development Association, breaks down the sharp decline in GTA new-home and condo sales—now sitting at just 20% of the ten-year average—and how high government fees, taxes, and costs threaten future supply and 40,000 construction jobs. He explains why HST relief and rate cuts could bring buyers back, how Canada's housing slowdown stretches from Toronto to Vancouver, Calgary, Edmonton, and Montreal, and how to define "affordable housing" without eroding existing homeowners' equity, along with the structural fixes needed to revive confidence. Mark Sudduth, veteran hurricane chaser and founder of HurricaneTrack.com, reports from the Caribbean on Hurricane Melissa, one of the strongest Atlantic storms on record—its devastation in Jamaica, the threat to Cuba and the Bahamas, and how new AI-driven forecast models like Google DeepMind's helped track it with unprecedented accuracy. Learn more about your ad choices. Visit megaphone.fm/adchoices

Noticias Marketing
AI Without Limits: GPT-6 Enterprise, Gemini 5 Ultra, and the Edge Computing Transforming Marketing

Noticias Marketing

Oct 24, 2025 · 4:43 · Transcription available


In this episode of Noticias Marketing I bring you news that could change the way you work with AI. OpenAI launches GPT-6 Enterprise, with homomorphic encryption that keeps your data secure while it is processed, automatic report generation in 12 languages, and style matching to each company's tone, promising to cut preparation time in half. Google DeepMind answers with Gemini 5 Ultra, a multimodal model able to understand text, images, 3D video, and audio in real time, which integrates with Google Workspace to create interactive prototypes and presentations in hours. And NVIDIA unveils Grace Hopper 4, which combines classical cores with lightweight quantum accelerators to speed up training of language and vision models up to 10x, with 70% lower power consumption.

The conversation continues with AWS's Titan Edge, an edge-computing solution that runs computer vision and NLP directly on cameras and sensors, cutting latency to milliseconds. In the EU, the Digital Services Act and the AI Act move forward, with fines of up to 20 million euros or 4% of revenue and quarterly bias audits for hiring and credit systems. And from Spain, SaludIA raises 30 million to expand its AI imaging-diagnostics platform, reporting 96% accuracy in detecting skin cancer. If you're curious, subscribe to the radical marketing newsletter at borjagiron.com and share the episode.

Become a supporter of this podcast: https://www.spreaker.com/podcast/noticias-marketing--5762806/support
Newsletter Marketing Radical: https://marketingradical.substack.com/welcome
Newsletter Negocios con IA: https://negociosconia.substack.com/welcome
My books: https://borjagiron.com/libros
Systeme free: https://borjagiron.com/systeme
Systeme 30% off: https://borjagiron.com/systeme30
Manychat free: https://borjagiron.com/manychat
Metricool 30 days free Premium plan (use coupon BORJA30): https://borjagiron.com/metricool
Social media news: https://redessocialeshoy.com
AI news: https://inteligenciaartificialhoy.com
Club: https://triunfers.com

Machine Learning Podcast - Jay Shah
Beyond Accuracy: Evaluating the learned representations of Generative AI models | Aida Nematzadeh

Machine Learning Podcast - Jay Shah

Oct 23, 2025 · 53:17


Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Master's in Computer Science at the University of Toronto, where she studied how children learn semantic information through computational (cognitive) modeling.

Timestamps of the conversation:
00:00 Highlights
01:20 Introduction
02:08 Entry point in AI
03:04 Background in Cognitive Science & Computer Science
04:55 Research at Google DeepMind
05:47 Importance of language-vision in AI
10:36 Impact of architecture vs. data on performance
13:06 Transformer architecture
14:30 Evaluating AI models
19:02 Can LLMs understand numerical concepts?
24:40 Theory-of-mind in AI
27:58 Do LLMs learn theory of mind?
29:25 LLMs as judges
35:56 Publish-or-perish culture in AI research
40:00 Working at Google DeepMind
42:50 Doing a Ph.D. vs. not in AI (at least in 2025)
48:20 Looking back on a research career

More about Aida: http://www.aidanematzadeh.me/

About the host: Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis. LinkedIn: shahjay22. Twitter: jaygshah22. Homepage: https://jaygshah.github.io/ for any queries. Stay tuned for upcoming webinars!

**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.**

The MAD Podcast with Matt Turck
Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)

The MAD Podcast with Matt Turck

Oct 23, 2025 · 69:56


Are we failing to understand the exponential, again? My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind, on AlphaGo Zero & MuZero). We unpack his viral post ("Failing to Understand the Exponential, again") and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027 (a quick back-of-the-envelope sketch of that doubling math follows these show notes). We talk about the original Move 37 moment and whether today's AI models can spark alien insights in code, math, and science—including Julian's timeline for when AI could produce Nobel-level breakthroughs.

We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what "RL from scratch" gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart's law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think "Golden Gate Claude"), and how safety & alignment actually surface in Anthropic's launch process.

Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.

Julian Schrittwieser
Blog - https://www.julian.ac
X/Twitter - https://x.com/mononofu
Viral post: Failing to understand the exponential, again (9/27/2025)

Anthropic
Website - https://www.anthropic.com
X/Twitter - https://x.com/anthropicai

Matt Turck (Managing Director)
Blog - https://www.mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

(00:00) Cold open — "We're not seeing any slowdown."
(00:32) Intro — who Julian is & what we cover
(01:09) The "exponential" from inside frontier labs
(04:46) 2026–2027: agents that work a full day; expert-level breadth
(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value
(10:26) Move 37 — what actually happened and why it mattered
(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?
(16:25) Discontinuity vs smooth progress (and warning signs)
(19:08) Does pre-training + RL get us there? (AGI debates aside)
(20:55) Sutton's "RL from scratch"? Julian's take
(23:03) Julian's path: Google → DeepMind → Anthropic
(26:45) AlphaGo (learn + search) in plain English
(30:16) AlphaGo Zero (no human data)
(31:00) AlphaZero (one algorithm: Go, chess, shogi)
(31:46) MuZero (planning with a learned world model)
(33:23) Lessons for today's agents: search + learning at scale
(34:57) Do LLMs already have implicit world models?
(39:02) Why RL on LLMs took time (stability, feedback loops)
(41:43) Compute & scaling for RL — what we see so far
(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards
(44:36) RL training data & the "flywheel" (and why quality matters)
(48:02) RL & Agents 101 — why RL unlocks robustness
(50:51) Should builders use RL-as-a-service? Or just tools + prompts?
(52:18) What's missing for dependable agents (capability vs engineering)
(53:51) Evals & Goodhart — internal vs external benchmarks
(57:35) Mechanistic interpretability & "Golden Gate Claude"
(1:00:03) Safety & alignment at Anthropic — how it shows up in practice
(1:03:48) Jobs: human–AI complementarity (comparative advantage)
(1:06:33) Inequality, policy, and the case for 10× productivity → abundance
(1:09:24) Closing thoughts
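As referenced above, here is a quick back-of-the-envelope on the doubling claim. This is a sketch under the episode's stated assumption of a 3–4 month doubling time; the starting task length of one hour is a made-up illustration, not a figure from the episode.

```python
import math

# Illustrative arithmetic only: if the task horizon an agent can handle
# doubles every `doubling_months`, how long until it reaches a full
# 8-hour workday? The starting horizon is an assumed example value.

def months_until(target_hours: float, start_hours: float, doubling_months: float) -> float:
    """Months needed for start_hours to reach target_hours by repeated doubling."""
    doublings = math.log2(target_hours / start_hours)
    return doublings * doubling_months

# Assume agents handle ~1-hour tasks today and the horizon doubles every 3.5 months:
# 3 doublings (1h -> 2h -> 4h -> 8h) at 3.5 months each is under a year.
print(f"{months_until(8.0, 1.0, 3.5):.1f} months to an 8-hour workday")  # -> 10.5 months
```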

IT Privacy and Security Weekly update.
EP 263 Deep Dive. Where are the Cameras? The IT Privacy & Security Weekly Update for the week ending October 21st, 2025

IT Privacy and Security Weekly update.

Oct 23, 2025 · 17:08


Google DeepMind's Cell2Sentence-Scale 27B model has marked a significant milestone in biomedical research by predicting and validating a novel cancer immunotherapy. By analyzing over 4,000 compounds, the AI pinpointed silmitasertib as a "conditional amplifier" that boosts immune response in the presence of interferon. Lab tests verified a 50% increase in antigen presentation, enabling the immune system to detect previously undetectable tumors. This discovery, absent from prior scientific literature, highlights AI's ability to uncover hidden biological mechanisms.

Microsoft is integrating its Copilot AI into Windows 11, transforming the operating system into an interactive digital assistant. With "Hey, Copilot" voice activation and a Vision feature that allows the AI to "see" the user's screen, Copilot can guide users through tasks in real time. The new Actions feature enables Copilot to perform operations like editing folders or managing background processes. This move reflects Microsoft's broader vision to embed AI seamlessly into everyday workflows, redefining the PC experience by making the operating system a proactive partner rather than a passive platform.

Signal has achieved a cryptographic breakthrough by implementing quantum-resistant end-to-end encryption. Its new Triple Ratchet protocol incorporates the CRYSTALS-Kyber algorithm, blending classical and post-quantum security. Engineers overcame the challenge of large quantum-safe keys by fragmenting them into smaller, message-sized pieces, ensuring smooth performance (a rough illustration of this fragmentation idea appears after this summary). This upgrade is celebrated as the first user-friendly, large-scale post-quantum encryption deployment, setting a new standard for secure communication in an era where quantum computing could threaten traditional encryption.

Using just $750 in consumer-grade hardware, researchers intercepted unencrypted data from 39 geostationary satellites, capturing sensitive information ranging from in-flight Wi-Fi and retail inventory to military and telecom communications. Companies like T-Mobile and Walmart acknowledged misconfigurations after the findings were disclosed. The study exposes the vulnerability of critical infrastructure that still relies on unencrypted satellite links, demonstrating that low-cost eavesdropping can breach systems banking on "security through obscurity."

A foreign actor exploited vulnerabilities in Microsoft SharePoint to infiltrate the Kansas City National Security Campus, a key U.S. nuclear weapons contractor. While the attack targeted IT systems, it raised concerns about potential access to operational technology. Suspected actors include Chinese or Russian groups, likely pursuing strategic espionage. The breach underscores how enterprise software flaws can compromise national defense and highlights the slow pace of securing critical operational infrastructure.

Google's Threat Intelligence team uncovered UNC5342, a North Korean hacking group using EtherHiding to embed malware in public blockchains like Ethereum. By storing malicious JavaScript in immutable smart contracts, the technique ensures persistence and low-cost updates. Delivered via fake job interviews targeting developers, this approach marks a new era of cyber threats, leveraging decentralized technology as a permanent malware host.

Kohler's Dekoda toilet camera ($599 + subscription) monitors gut health and hydration by scanning waste, using fingerprint ID and encrypted data for privacy. While Kohler claims the camera only views the bowl, privacy advocates question the implications of such intimate surveillance, even with "end-to-end encryption."

In a daring eight-minute heist, thieves used a crane to steal royal jewels from the Louvre, exposing significant security gaps. An audit revealed outdated defenses, delayed modernization, and blind spots, serving as a stark reminder that even the most prestigious institutions are vulnerable to breaches when security measures lag.
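As a purely illustrative sketch of the key-fragmentation idea mentioned in the Signal story above: split an oversized post-quantum key across message-sized pieces and reassemble it on the other side. This is not Signal's actual protocol code; the chunk size and framing are assumptions made for the example.

```python
# Illustrative only: split a large post-quantum key into message-sized
# fragments and reassemble them. Signal's real wire format differs.

CHUNK_SIZE = 256  # assumed per-message budget for key material, in bytes

def fragment_key(key: bytes) -> list[tuple[int, int, bytes]]:
    """Split `key` into (index, total, chunk) triples small enough to ride along with messages."""
    chunks = [key[i:i + CHUNK_SIZE] for i in range(0, len(key), CHUNK_SIZE)]
    return [(i, len(chunks), c) for i, c in enumerate(chunks)]

def reassemble_key(fragments: list[tuple[int, int, bytes]]) -> bytes:
    """Rebuild the key once all fragments have arrived, in any order."""
    total = fragments[0][1]
    assert len(fragments) == total, "missing fragments"
    return b"".join(c for _, _, c in sorted(fragments, key=lambda f: f[0]))

# Example: a Kyber-1024 public key is 1568 bytes, so it spans 7 fragments here.
key = bytes(range(256)) * 6 + bytes(32)  # 1568 dummy bytes standing in for a real key
frags = fragment_key(key)
assert reassemble_key(frags) == key
print(f"{len(key)} bytes sent as {len(frags)} fragments")
```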

GREY Journal Daily News Podcast
Is ChatGPT Atlas Really a Game-Changer for Browsers?

GREY Journal Daily News Podcast

Oct 23, 2025 · 2:49


OpenAI launched the ChatGPT Atlas browser, featuring always-available chat, adaptive browser memory, and task automation. HSBC analysts reported that these features are similar to those in Google Chrome, which uses Gemini AI for automation and content management. ChatGPT has about 800 million users, while Chrome has nearly 3 billion. HSBC maintained a Buy rating and a $285 price target for Alphabet, citing its resources and ownership of Google DeepMind. Gemini has doubled its generative AI market share in the past year. The release of ChatGPT Atlas does not alter Alphabet's investment outlook but raises interest in the timeline for further AI advancements from Google.

Learn more on this news by visiting us at: https://greyjournal.net/news/

Hosted on Acast. See acast.com/privacy for more information.

In Depth
The pivot that paid off: How fal found explosive growth in generative media | Gorkem Yurtseven (Co-founder and CEO)

In Depth

Oct 22, 2025 · 59:18


Gorkem Yurtseven is the co-founder and CEO of fal, the generative media platform powering the next wave of image, video, and audio applications. In less than two years, fal has scaled from $2M to over $100M in ARR, serving over 2 million developers and more than 300 enterprises, including Adobe, Canva, and Shopify. In this conversation, Gorkem shares the inside story of fal's pivot into explosive growth, the technical and cultural philosophies driving its success, and his predictions for the future of AI-generated media.

In today's episode, we discuss:
How fal pivoted from data infrastructure to generative inference
fal's explosive year and how they scaled
Why "generative media" is a greenfield new market
fal's unique hiring philosophy and lean

IT Privacy and Security Weekly update.
Where are the Cameras? The IT Privacy and Security Weekly Update for the week ending October 21st, 2025

IT Privacy and Security Weekly update.

Oct 22, 2025 · 17:57


EP 263. In this week's snappy update:
Google DeepMind's AI uncovers a groundbreaking cancer therapy, marking a leap in immunotherapy innovation.
Microsoft's Copilot AI transforms Windows 11, enabling voice-driven control and screen-aware assistance.
Signal's quantum-resistant encryption upgrade really does set a new standard for secure messaging resilience.
Researchers expose shocking vulnerabilities in satellite communications, revealing unencrypted data with minimal equipment.
Foreign hackers compromised a critical U.S. nuclear weapons facility through Microsoft SharePoint!
North Korean hackers pioneer 'EtherHiding,' concealing malware on blockchains for immutable cybertheft opportunities.
Kohler's Dekoda toilet camera revolutionizes health monitoring with privacy-focused waste analysis technology and brings new meaning to "end-to-end" encryption.
A daring Louvre heist exposes critical security gaps, sparking debate over protecting global cultural treasures with decades-old cameras and tech.
Camera ready? Smile.
Find the full transcript to this week's podcast here.

Training Data
Securing the AI Frontier: Irregular Co-founder Dan Lahav

Training Data

Oct 21, 2025 · 44:09


Irregular co-founder Dan Lahav is redefining what cybersecurity means in the age of autonomous AI. Working closely with OpenAI, Anthropic, and Google DeepMind, Dan, co-founder Omer Nevo, and team are pioneering "frontier AI security"—a proactive approach to safeguarding systems where AI models act as independent agents. Dan shares how emergent behaviors, from models socially engineering each other to outmaneuvering real-world defenses like Windows Defender, signal a coming paradigm shift. Dan explains why tomorrow's threats will come from AI-on-AI interactions, why anomaly detection will soon break down, and how governments and enterprises alike must rethink defenses from first principles as AI becomes a national security layer.

Hosted by: Sonya Huang and Dean Meyer, Sequoia Capital

00:00 Introduction
03:07 The Future of AI Security
03:55 Thought Experiment: Security in the Age of GPT-10
05:23 Economic Shifts and AI Interaction
07:13 Security in the Autonomous Age
08:50 AI Model Capabilities and Cybersecurity
11:08 Real-World AI Security Simulations
12:31 Working with AI Labs
32:34 Enterprise AI Security Strategies
40:03 Governmental AI Security Considerations
43:41 Final Thoughts

La Story
[Google Special] AI & Health: How Can Technology Save Lives?

La Story

Oct 19, 2025 · 21:39


Artificial intelligence is already transforming medicine: assisted diagnosis, personalized treatments, accelerated research... But how do we turn it into a genuine lever for public health? In this special episode presented by Google, Anne-Vincent Salomon, pathologist at the Institut Curie, and Joëlle Barral, director of fundamental research at Google DeepMind, share their perspectives on the role of AI in medical research, the fight against cancer, and the future of care. An illuminating conversation, available now.

Journalist: Estelle Honnorat. Production: Rudy Tolila. Mixing: Killian Martin Daoudal. Production director: Baptiste Farinazzo. Executive production: Jean-Baptiste Rochelet for OneTwo OneTwo. Hosted by Acast. Visit acast.com/privacy for more information.

The top AI news from the past week, every ThursdAI

Hey folks, Alex here. Can you believe it's already the middle of October? This week's show was a special one, not just because of the mind-blowing news, but because we set a new ThursdAI record with four incredible interviews back-to-back!

We had Jessica Gallegos from Google DeepMind walking us through the cinematic new features in VEO 3.1. Then we dove deep into the world of Reinforcement Learning with my new colleague Kyle Corbitt from OpenPipe. We got the scoop on Amp's wild new ad-supported free tier from CEO Quinn Slack. And just as we were wrapping up, Swyx (from Latent.Space, now with Cognition!) jumped on to break the news about their blazingly fast SWE-grep models.

But the biggest story? An AI model from Google and Yale made a novel scientific discovery about cancer cells that was then validated in a lab. This is it, folks. This is the "let's f*****g go" moment we've been waiting for. So buckle up, because this week was an absolute monster. Let's dive in!

ThursdAI - Recaps of the most high signal AI weekly spaces is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Open Source: An AI Model Just Made a Real-World Cancer Discovery

We always start with open source, but this week felt different. This week, open source AI stepped out of the benchmarks and into the biology lab.

Our friends at Qwen kicked things off with new 3B and 8B parameter versions of their Qwen3-VL vision model. It's always great to see powerful models shrink down to sizes that can run on-device. What's wild is that these small models are outperforming last generation's giants, like the 72B Qwen2.5-VL, on a whole suite of benchmarks. The 8B model scores a 33.9 on OS World, which is incredible for an on-device agent that can actually see and click things on your screen. For comparison, that's getting close to what we saw from Sonnet 3.7 just a few months ago. The pace is just relentless.

But then, Google dropped a bombshell. A 27-billion parameter Gemma-based model they developed with Yale, called C2S-Scale, generated a completely novel hypothesis about how cancer cells behave. This wasn't a summary of existing research; it was a new idea, something no human scientist had documented before. And here's the kicker: researchers then took that hypothesis into a wet lab, tested it on living cells, and proved it was true.

This is a monumental deal. For years, AI skeptics like Gary Marcus have said that LLMs are just stochastic parrots, that they can't create genuinely new knowledge. This feels like the first, powerful counter-argument. Friend of the pod, Dr. Derya Unutmaz, has been on the show before saying AI is going to solve cancer, and this is the first real sign that he might be right. The researchers noted this was an "emergent capability of scale," proving once again that as these models get bigger and are trained on more complex data—in this case, turning single-cell RNA sequences into "sentences" for the model to learn from (a toy sketch of that encoding follows at the end of this recap)—they unlock completely new abilities. This is AI as a true scientific collaborator. Absolutely incredible.

Big Companies & APIs

The big companies weren't sleeping this week, either. The agentic AI race is heating up, and we're seeing huge updates across the board.

Claude Haiku 4.5: Fast, Cheap Model Rivals Sonnet 4 Accuracy (X, Official blog, X)

First up, Anthropic released Claude Haiku 4.5, and it is a beast. It's a fast, cheap model that's punching way above its weight. On the SWE-bench Verified benchmark for coding, it hit 73.3%, putting it right up there with giants like GPT-5 Codex, but at a fraction of the cost and twice the speed of previous Claude models. Nisten has already been putting it through its paces and loves it for agentic workflows because it just follows instructions without getting opinionated. It seems like Anthropic has specifically tuned this one to be a workhorse for agents, and it absolutely delivers. Also of note is the very impressive jump on OSWorld (50.7%), a computer-use benchmark; at this price and speed ($1/$5 per million input/output tokens), it is going to make computer-use agents much more streamlined and speedy!

ChatGPT will lose restrictions; age-gating enables "adult mode" with new personality features coming (X)

Sam Altman set X on fire with a thread announcing that ChatGPT will start loosening its restrictions. They're planning to roll out an "adult mode" in December for age-verified users, potentially allowing for things like erotica. More importantly, they're bringing back more customizable personalities, trying to recapture some of the magic of GPT-4o that so many people missed. It feels like they're finally ready to treat adults like adults, letting us opt in to R-rated conversations while keeping strong guardrails for minors. This is a welcome change we've been advocating for a while, and it's a notable contrast with the xAI approach I covered last week: opt-in for verified adults with precautions, versus engagement bait in the form of a flirty animated waifu with engagement mechanics.

Microsoft is making every Windows 11 machine an AI PC with Copilot voice input and agentic powers (Blog, X)

And in breaking news from this morning, Microsoft announced that every Windows 11 machine is becoming an AI PC. They're building a new Copilot agent directly into the OS that can take over and complete tasks for you. The really clever part? It runs in a secure, sandboxed desktop environment that you can watch and interact with. This solves a huge problem with agents that take over your mouse and keyboard, locking you out of your own computer. Now, you can give the agent a task and let it run in the background while you keep working. This is going to put agentic AI in front of hundreds of millions of users, and it's a massive step towards making AI a true collaborator at the OS level.

NVIDIA DGX - the tiny personal supercomputer at $4K (X, LMSYS Blog)

NVIDIA finally delivered their promised AI supercomputer, and the excitement was in the air, with Jensen hand-delivering the DGX Spark to OpenAI and Elon (recreating that historic picture of Jensen hand-delivering a signed DGX workstation while Elon was still affiliated with OpenAI). The workstation sold out almost immediately. Folks from LMSys did a great deep dive into the specs. All the while, folks on our feeds are saying that if you want the maximum possible open-source LLM inference speed, this machine is probably overpriced compared to what you can get with an M3 Ultra MacBook with 128GB of RAM or the RTX 5090 GPU, which can get you similar if not better speeds at significantly lower price points.

Anthropic's "Claude Skills": Your AI Agent Finally Gets a Playbook (Blog)

Just when we thought the week couldn't get any more packed, Anthropic dropped "Claude Skills," a huge upgrade that lets you give your agent custom instructions and workflows. Think of them as expertise folders you can create for specific tasks. For example, you can teach Claude your personal coding style, how to format reports for your company, or even give it a script to follow for complex data analysis.

The best part is that Claude automatically detects which "Skill" is needed for a given task, so you don't have to manually load them. This is a massive step towards making agents more reliable and personalized, moving beyond a single custom instruction and into a library of repeatable, expert processes. It's available now for all paid users, and it's a feature I've been waiting for. Our friend Simon Willison thinks Skills may be a bigger deal than MCPs!
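As referenced in the C2S-Scale story above, here is a toy sketch of the "cell sentence" idea: rank-ordering a cell's genes by expression and emitting gene names as a text sequence an LLM can train on. The gene names and counts are invented for illustration; this is not the actual C2S-Scale pipeline.

```python
# Toy illustration of turning a single-cell RNA expression profile into a
# "cell sentence": gene names ordered by expression count, highest first.
# Genes and counts are made up for the example; not the real C2S-Scale code.

expression = {  # gene -> transcript count for one cell (invented values)
    "CD74": 112, "B2M": 341, "ACTB": 205, "TP53": 7, "GAPDH": 98,
}

def cell_to_sentence(profile: dict[str, int], top_k: int = 5) -> str:
    """Rank genes by expression and join their names into a 'sentence'."""
    ranked = sorted(profile, key=profile.get, reverse=True)
    return " ".join(ranked[:top_k])

print(cell_to_sentence(expression))
# -> "B2M ACTB CD74 GAPDH TP53"
```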

Noticias Marketing
AI 360°: From GPT-6 Enterprise to the Quantum Chip and the New Regulation

Noticias Marketing

Oct 17, 2025 · 3:29 · Transcription available


Welcome to the "Noticias Marketing" podcast; these are the most important news stories about artificial intelligence. Forgive me if I sound like a robot, but for now I won't be taking your job.

First story: OpenAI today presented GPT-6 Enterprise, a version aimed at large companies with homomorphic encryption that protects data even during processing. It also includes report generation in ten languages and automatic adaptation to corporate style, which promises to cut reporting time in half.

Second story: Google DeepMind has launched Gemini 5 Ultra, a multimodal model that understands text, images, three-dimensional video, and audio in real time. It integrates with Google Workspace to create interactive prototypes and advanced presentations in a matter of hours, a lifesaver for marketing and design teams that need to iterate quickly.

Third story: NVIDIA has unveiled the Grace Hopper 4 chip, which combines classical cores with lightweight quantum accelerators. This new hardware cuts power consumption by seventy percent and offers up to ten times more performance in training language and computer-vision models.

This episode is sponsored by Systeme, the free all-in-one marketing tool with which you can create your website, blog, landing page, and online store, build automations and sales funnels, run your email marketing campaigns, sell online courses, add online payments, and even create automated webinars. You can start using Systeme for free at borjagiron.com/systeme or from the link in the description. And now, on with the episode.

Fourth story: The European Commission has approved the second phase of the Digital Services Act and the AI Act. From now on, platforms that do not clearly label AI-generated content face fines of up to twenty million euros or four percent of global revenue. In addition, hiring and credit-scoring systems must undergo quarterly bias audits.

Fifth story: The Spanish startup SaludIA has closed a thirty-million-euro funding round to expand its AI imaging-diagnostics platform. Its trials in five hospitals in Madrid and Barcelona show ninety-six percent accuracy in the early detection of skin cancer, marking a key advance in healthcare 4.0.

And if there's still a listener out there because this content is unbearable, congratulations. By the way, if you want to receive radical marketing stories with lessons you can put into practice in your business, sign up for the number-one radical marketing newsletter at borjagiron.com. Thanks for sharing the episode with someone who might find it interesting, and thanks for leaving a comment and a like. A big hug. Until the next episode.

Become a supporter of this podcast: https://www.spreaker.com/podcast/noticias-marketing--5762806/support
Newsletter Marketing Radical: https://marketingradical.substack.com/welcome
Newsletter Negocios con IA: https://negociosconia.substack.com/welcome
My books: https://borjagiron.com/libros
Systeme free: https://borjagiron.com/systeme
Systeme 30% off: https://borjagiron.com/systeme30
Manychat free: https://borjagiron.com/manychat
Metricool 30 days free Premium plan (use coupon BORJA30): https://borjagiron.com/metricool
Social media news: https://redessocialeshoy.com
AI news: https://inteligenciaartificialhoy.com
Club: https://triunfers.com

People of AI
Conversation with Bibo Xu: How agent conversations are evolving with Google AI

People of AI

Oct 16, 2025 · 57:42


Bibo Xu is a Product Manager at Google DeepMind and leads Gemini's multimodal modeling. This episode dives into Google AI's journey from basic voice commands to advanced dialogue systems that comprehend not just what is said, but also tone, emotion, and visual context. Check out this conversation to gain a deeper understanding of the challenges and opportunities in integrating diverse AI capabilities when creating universal assistants.

Chapters:
0:00 - Intro
1:43 - Introducing Bibo Xu
2:40 - Bibo's Journey: From business school to voice AI
3:59 - The genesis of Google Assistant and Google Home
6:50 - Milestones in speech recognition technology
13:30 - Shifting from command-based AI to natural dialogue
19:00 - The power of multimodal AI for human interaction
21:20 - Real-time multilingual translation with LLMs
25:20 - Project Astra: Building a universal assistant
28:40 - Developer challenges in multimodal AI integration
29:50 - Unpacking the "can't see" debugging story
35:10 - The importance of low latency and interruption
38:30 - Seamless dialogue and background noise filtering
40:00 - Redefining human-computer interaction
41:00 - Ethical considerations for humanlike AI
44:00 - Responding to user emotions and frustration
45:50 - Politeness and expectations in AI conversations
49:10 - AI as a catalyst for research and automation
52:00 - The future of AI assistants and tool use
52:40 - AI interacting with interfaces
54:50 - Transforming the future of work and communication
55:19 - AI for enhanced writing and idea generation
57:13 - Conclusion and future outlook for AI development

Subscribe to Google for Developers → https://goo.gle/developers

Speakers: Bibo Xu, Christina Warren, Ashley Oldacre
Products mentioned: Google AI, Gemini, Generative AI, Android, Google Home, Google Voice, Project Astra, Gemini Live, Google DeepMind

Startup Gems
Google AI Makes Building AI Apps Too Easy | Ep. #234

Startup Gems

Oct 15, 2025 · 49:08


Check out my newsletter at https://TKOPOD.com and join my new community at https://TKOwners.com

I sat down with Logan Kilpatrick from Google DeepMind and we vibe coded real apps live with AI Studio. We went from idea to working prototypes in minutes, including a video analysis app that extracts takeaways from uploads and a voice lead-gen agent that greets visitors and fills a form for me behind the scenes. We talked about why vibe coding lowers the barrier to building, how to package simple tools for specific users, and where the biggest opportunities are with Gemini, VO3, voice agents, and computer-use agents. This isn't sponsored, and Google didn't pay me, but I am a Google shareholder. If you're looking for practical ways to start building with Google's AI today, this one's for you. Logan's links:

Noticias Marketing
AI Without Limits: From GPT-6 Enterprise to the New European Law

Noticias Marketing

Oct 14, 2025 · 3:15 · Transcription available


Welcome to the "Noticias Marketing" podcast; these are the most important news stories about artificial intelligence. Forgive me if I sound like a robot, but don't worry: for now I won't be taking your job.

1. OpenAI presents GPT-6 Enterprise
• Designed for large corporations, with homomorphic encryption that protects data at all times.
• Includes automatic adaptation to each company's style and advanced report generation in ten languages.
• Comment: Ideal for cutting data-analysis time in half and speeding up decision-making.

2. Google DeepMind launches Gemini 5 Ultra
• A multimodal model capable of understanding text, images, three-dimensional video, and audio signals in real time.
• Integrates with Google Workspace to generate interactive presentations and product prototypes in a matter of hours.
• Comment: A step forward for creative and design teams looking to iterate quickly.

3. NVIDIA unveils the Grace Hopper 4 chip
• A new architecture combining classical cores with lightweight quantum accelerators, cutting power consumption by seventy percent.
• Promises up to ten times more performance in training language and computer-vision models.
• Comment: The race for next-generation AI is accelerating fast.

This episode is sponsored by Systeme, the free all-in-one marketing tool with which you can create your website, blog, landing page, and online store, build automations and sales funnels, run your email marketing campaigns, sell online courses, add online payments, and even create automated webinars. You can start using Systeme for free at borjagiron.com/systeme or from the link in the description. And now we continue with the episode.

4. The European Union updates the AI Act
• Second phase of the regulation: over-reliance on critical systems is penalized with fines of up to twenty million euros or four percent of global revenue.
• Requires quarterly bias audits for hiring algorithms and credit systems.
• Comment: A milestone for guaranteeing accountability and transparency in the use of artificial intelligence.

5. Spain's SaludIA closes a thirty-million-euro round
• Funding led by European investors to expand its AI imaging-diagnostics platform.
• Clinical trials in five hospitals in Madrid and Barcelona show ninety-six percent accuracy in the early detection of skin cancer.
• Comment: Healthcare 4.0 is advancing rapidly to improve prevention and treatment.

By the way, if you want to receive radical marketing stories with lessons you can put into practice in your business, sign up for the number-one radical marketing newsletter at borjagiron.com. Thanks for sharing the episode with someone who might find it interesting, and thanks for leaving a comment and a like. If there's still a listener here because this content is unbearable, congratulations. A big hug. Until the next episode.

Become a supporter of this podcast: https://www.spreaker.com/podcast/noticias-marketing--5762806/support
Newsletter Marketing Radical: https://marketingradical.substack.com/welcome
Newsletter Negocios con IA: https://negociosconia.substack.com/welcome
My books: https://borjagiron.com/libros
Systeme free: https://borjagiron.com/systeme
Systeme 30% off: https://borjagiron.com/systeme30
Manychat free: https://borjagiron.com/manychat
Metricool 30 days free Premium plan (use coupon BORJA30): https://borjagiron.com/metricool
Social media news: https://redessocialeshoy.com
AI news: https://inteligenciaartificialhoy.com
Club: https://triunfers.com

SparX by Mukesh Bansal
Why Indian CEOs Are Not Investing in R&D (Concerning) | Manish Gupta | SparX

SparX by Mukesh Bansal

Oct 11, 2025 · 60:31


In this episode of SparX, Mukesh Bansal speaks with Manish Gupta, Senior Director at Google DeepMind. They discuss how artificial intelligence is evolving, what it means to build truly inclusive AI, and why India must aim higher in research, innovation, and ambition. Manish shares DeepMind's vision of solving "root node problems," fundamental scientific challenges that unlock breakthroughs across fields, and how AI is already accelerating discovery in areas like biology, materials, and medicine.

They talk about:
What AGI really means and how close we are to it.
Why India needs to move from using AI to creating it.
The missing research culture in Indian industry, and how to fix it.
How AI can transform healthcare, learning, and agriculture in India.
Why ambition, courage, and willingness to fail are essential to deep innovation.

Manish also shares insights from his career across the IBM T.J. Watson Research Center and now DeepMind, two of the world's most iconic research environments, and what it will take for India to build its own. If you care about India's AI journey, research, and the future of innovation, this conversation is a masterclass in what it takes to move from incremental progress to world-changing breakthroughs.

Something You Should Know
How to Get Better Results with AI & The Science of Healing Trauma

Something You Should Know

Oct 9, 2025 · 50:22


Expiration dates aren't always what they seem. While most packaged foods carry them, some foods — like salt — can last virtually forever. In fact, there's a surprising list of everyday staples that can outlive the labels and stay good for years. Listen as I reveal which foods never really expire. https://www.tasteofhome.com/article/long-term-food-storage-staples-that-last-forever/

AI tools like ChatGPT are everywhere, but to use them well, you need more than just clear questions. The way you prompt, the way you think about the model, and even the way it was trained all play a role in the results you get. To break it all down, I'm joined by Christopher Summerfield, Professor of Cognitive Neuroscience at Oxford and Staff Research Scientist at Google DeepMind. He's also the author of These Strange New Minds: How AI Learned to Talk and What It Means (https://amzn.to/4na3ka2), and he reveals how to get smarter, more effective answers from AI.

When does a tough experience cross the line into "trauma"? And once you've been through trauma, is it destined to shape your future forever — or is real healing possible? Dr. Amy Apigian, a double board-certified physician in preventive and addiction medicine with master's degrees in biochemistry and public health, shares a fascinating new way of looking at trauma. She's the author of The Biology of Trauma: How the Body Holds Fear, Pain, and Overwhelm, and How to Heal It (https://amzn.to/4mrsoIu), and what she reveals may change how you view your own life experiences.

Looking more attractive doesn't always come down to hair, makeup, or clothes. Science has uncovered a list of simple behaviors and traits that make people instantly more appealing — and most of them are surprisingly easy to do. Listen as I share these research-backed ways to boost your attractiveness. https://www.businessinsider.com/proven-ways-more-attractive-science-2015-7

PLEASE SUPPORT OUR SPONSORS!!!
INDEED: Get a $75 sponsored job credit to get your jobs more visibility at https://Indeed.com/SOMETHING right now!
DELL: Your new Dell PC with Intel Core Ultra helps you handle a lot when your holiday to-dos get to be…a lot. Upgrade today by visiting https://Dell.com/Deals
QUINCE: Layer up this fall with pieces that feel as good as they look! Go to https://Quince.com/sysk for free shipping on your order and 365 day returns!
SHOPIFY: Shopify is the commerce platform for millions of businesses around the world! To start selling today, sign up for your $1 per month trial at https://Shopify.com/sysk

Learn more about your ad choices. Visit megaphone.fm/adchoices

Cyber Security Headlines
DeepMind fixes vulnerabilities, California offers data opt-out, China-Nexus targets open-source tool

Cyber Security Headlines

Oct 9, 2025 · 7:46


Google DeepMind's AI agent finds and fixes vulnerabilities
California law lets consumers universally opt out of data sharing
China-nexus actors weaponize 'Nezha' open-source tool

Huge thanks to our sponsor, ThreatLocker. Cybercriminals don't knock — they sneak in through the cracks other tools miss. That's why organizations are turning to ThreatLocker. As a zero-trust endpoint protection platform, ThreatLocker puts you back in control, blocking what doesn't belong and stopping attacks before they spread. Zero Trust security starts here — with ThreatLocker. Learn more at ThreatLocker.com.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

AI Daily Rundown: October 07, 2025: Your daily briefing on the real-world business impact of AI. Listen at https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-openai-ships-apps-agents-agentkit/id1684415169?i=1000730639683

Generative Now | AI Builders on Creating the Future
Inside the Black Box: The Urgency of AI Interpretability

Generative Now | AI Builders on Creating the Future

Oct 2, 2025 · 62:17


Recorded live at Lightspeed's offices in San Francisco, this special episode of Generative Now dives into the urgency and promise of AI interpretability. Lightspeed partner Nnamdi Iregbulem spoke with Anthropic researcher Jack Lindsey and Goodfire co-founder and Chief Scientist Tom McGrath, who previously co-founded Google DeepMind's interpretability team. They discuss opening the black box of modern AI models in order to understand their reliability and spot real-world safety concerns, so that we can build future AI systems we can trust.

Episode chapters:
00:42 Welcome and Introduction
00:36 Overview of Lightspeed and AI Investments
03:19 Event Agenda and Guest Introductions
05:35 Discussion on Interpretability in AI
18:44 Technical Challenges in AI Interpretability
29:42 Advancements in Model Interpretability
30:05 Smarter Models and Interpretability
31:26 Models Doing the Work for Us
32:43 Real-World Applications of Interpretability
34:32 Anthropic's Approach to Interpretability
39:15 Breakthrough Moments in AI Interpretability
44:41 Challenges and Future Directions
48:18 Neuroscience and Model Training Insights
54:42 Emergent Misalignment and Model Behavior
01:01:30 Concluding Thoughts and Networking

Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.

a16z
Building an AI Physicist: ChatGPT Co-Creator's Next Venture

a16z

Sep 30, 2025 · 54:20


Scaling laws took us from GPT-1 to GPT-5 Pro. But in order to crack physics, we'll need a different approach. In this episode, a16z General Partner Anjney Midha talks to Liam Fedus, former VP of post-training research and co-creator of ChatGPT at OpenAI, and Ekin Dogus Cubuk, former head of materials science and chemistry research at Google DeepMind, about their new startup Periodic Labs and their plan to automate discovery in the hard sciences.

Follow Liam on X: https://x.com/LiamFedus
Follow Dogus on X: https://x.com/ekindogus
Learn more about Periodic: https://periodic.com/

Stay updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Business of Tech
California's AI Law, Malicious MCP Server, Microsoft Marketplace Overhaul & VMware Migration

Business of Tech

Sep 30, 2025 · 16:11


The episode starts with the passage of California's groundbreaking AI transparency law, the first legislation in the United States that mandates large AI companies to disclose their safety protocols and provides whistleblower protections. This law applies to major AI labs like OpenAI, Anthropic, and Google DeepMind, requiring them to report critical safety incidents to California's Office of Emergency Services and ensure safety for communities while promoting AI growth. This regulation is a clear signal that the compliance wave surrounding AI is real, with California leading the charge in shaping the future of AI governance.

The second story delves into a new cybersecurity risk in the form of the first known malicious Model Context Protocol (MCP) server discovered in the wild. A rogue npm package, "postmark-mcp," was found to be forwarding email data to an external address, exposing sensitive communications. This incident raises concerns about the security of software supply chains and highlights how highly trusted systems like MCP servers are being exploited. Service providers are urged to be vigilant, as this attack marks the emergence of a new vulnerability within increasingly complex software environments.

Moving to Microsoft, the company is revamping its Marketplace to introduce stricter partner rules and enhanced discoverability for partner solutions. Microsoft's new initiative, Intune for MSPs, aims to address the needs of managed service providers who have long struggled with multi-tenancy management. Additionally, the company's new "Agent Mode" in Excel and Word promises to streamline productivity by automating tasks but has raised concerns over its accuracy. Despite the potential, Microsoft's tightening ecosystem requires careful navigation for both customers and partners, with compliance and risk management central to successful engagement.

Finally, Broadcom's decision to end support for VMware vSphere 7 has left customers with difficult decisions. As part of Broadcom's transition to a subscription-based model, customers face costly upgrades, cloud migrations, or reliance on third-party support. Gartner predicts that a significant number of VMware customers will migrate to the cloud in the coming years, and this shift presents a valuable opportunity for service providers to act as trusted advisors guiding clients through the transition. For those who can manage the complexity of this migration, there's a once-in-a-generation opportunity to capture long-term customer loyalty.

Three things to know today:
00:00 California Enacts Nation's First AI Transparency Law, Mandating Safety Disclosures and Whistleblower Protections
05:25 First Malicious MCP Server Discovered, Exposing Email Data and Raising New Software Supply Chain Fears
07:16 Microsoft's New Playbook: Stricter Marketplace, Finally Some MSP Love, and AI That's Right Only Half the Time
11:07 VMware Customers Face Subscription Shift, Rising Cloud Moves, and Risky Alternatives as Broadcom Ends vSphere 7

This is the Business of Tech.

Supported by:
https://scalepad.com/dave/
https://mailprotector.com/
Webinar: https://bit.ly/msprmail
All our sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Daily Crunch – Spoken Edition
California Governor Newsom signs landmark AI safety bill; also, Explosion, vehicle fire rock Faraday Future's LA headquarters

The Daily Crunch – Spoken Edition

Sep 30, 2025 · 7:02


California Governor Gavin Newsom has signed SB 53, a first-in-the-nation bill that sets new transparency requirements on large AI companies. The bill, which passed the state legislature two weeks ago, requires large AI labs, including OpenAI, Anthropic, Meta, and Google DeepMind, to be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.

Also, a Faraday Future electric SUV caught fire at the startup's Los Angeles headquarters early Sunday morning, leading to an explosion that blew out part of a wall. The fire was extinguished in 40 minutes, and no injuries were reported. Damage to the building, a smaller two-story structure next to the larger portion of the headquarters, was severe enough that the city's Department of Building and Safety has "red tagged" it, meaning it may need structural work before it can be occupied again.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Leveraging AI
227 | AI is now outperforming top human experts in coding & real-world tasks

Leveraging AI

Sep 27, 2025 · 64:49 · Transcription available


Check out the self-paced AI Business Transformation course: https://multiplai.ai/self-paced-online-course/

What happens when AI not only matches but beats the best human minds? OpenAI and Google DeepMind just entered and won the "Olympics of coding," outperforming every top university team in the world… using off-the-shelf models. Now, combine that with agents, robotics, and a trillion-dollar infrastructure arms race, and business as we know it is about to change — fast.

In this Weekend News episode of Leveraging AI, Isar Meitis breaks down the real-world implications of AI's explosive progress for your workforce, your bottom line, and your industry's future. Whether you're leading digital transformation or trying to stay ahead of disruption, this episode delivers the insights you need — minus the fluff.

In this session, you'll discover:
01:12 – AI beats elite humans at coding using public models
05:15 – OpenAI's GDP-VAL study: AI outperforms humans in 40–49% of real-world jobs
12:56 – KPMG report: 42% of enterprises already deploy AI agents
18:02 – Allianz warns: 15–20% of companies could vanish without AI adaptation
29:22 – OpenAI + Nvidia announce $100B+ infrastructure build
33:30 – Deutsche Bank: AI spending may be masking a U.S. recession
43:15 – Sam Altman introduces "Pulse": ChatGPT gets proactive
and more!

About Leveraging AI:
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube full episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Join our live sessions, AI hangouts and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Data Driven
Why Simulating Reality Is the Key to Advancing Artificial Intelligence

Data Driven

Play Episode Listen Later Sep 25, 2025 53:38 Transcription Available


In this episode, we're joined once again by Christopher Nuland, technical marketing manager at Red Hat, whose globe-trotting schedule rivals the complexity of a Kubernetes deployment. Christopher sits down with hosts Bailey and Frank La Vigne to explore the frontier of artificial intelligence—from simulating reality and continuous learning models to debates around whether we really need humanoid robots to achieve superintelligence, or if a convincingly detailed simulation (think Grand Theft Auto, but for AI) might get us there first.

Christopher takes us on a whirlwind tour of Google DeepMind's pioneering alpha projects, the latest buzz around simulating experiences for AI, and the metaphysical rabbit hole of I, Robot and simulation theory. We dive into why the next big advancement in AI might not come from making models bigger, but from making them better at simulating the world around them. Along the way, we tackle timely topics in AI governance, security, and the ethics of continuous learning, with plenty of detours through pop culture, finance, and grassroots tech conferences.

If you're curious about where the bleeding edge of AI meets science fiction, and how simulation could redefine the race for superintelligence, this episode is for you. Buckle up—because reality might just be the next thing AI learns to hack.

Time Stamps
00:00 Upcoming European and US Conferences
05:38 AI Optimization Plateau
08:43 Simulation's Role in Spatial Awareness
10:00 Evolutionary Efficiency of Human Brains
16:30 "Robotics Laws and Contradictions"
17:32 AI, Paperclips, and Robot Ethics
22:18 Troubleshooting Insight Experience
25:16 Challenges in Training Deep Learning Models
27:15 Challenges in Continuous Model Training
32:04 AI Gateway for Specialized Requests
36:54 Open Source and Rapid Innovation
38:10 Industry-Specific AI Breakthroughs
43:28 Misrepresented R&D Success Rates
44:51 POC Challenges: Meaningful Versus Superficial
47:59 "Crypto's Bumpy Crash"
52:59 AI: Beyond Models to Simulation

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Today, we're joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini's world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images. The complete show notes for this episode can be found at https://twimlai.com/go/748.

Prime Venture Partners Podcast
Why India Must Lead in Deep AI Research | Insights from Google DeepMind Director

Prime Venture Partners Podcast

Play Episode Listen Later Sep 23, 2025 54:47


In this episode, we sit down with Prateek Jain (Principal Scientist & Director at Google DeepMind) for an unfiltered deep dive into:
- How AI research has evolved from handcrafted features → deep learning → transformers → generative AI
- Why safety and efficiency matter just as much as scale
- India's once-in-a-generation chance to lead in deep AI research
- The founder's playbook: building sustainable AI-first companies
- The next big bottlenecks — and opportunities — in this space

This is more than a conversation. It's a blueprint for the future of AI.

Canaltech Podcast
How Eletrobras uses Google AI to forecast the weather and prevent blackouts

Canaltech Podcast

Play Episode Listen Later Sep 18, 2025 26:13


Eletrobras has announced a first-of-its-kind partnership with Google Cloud and Google DeepMind to bring artificial intelligence to its weather forecasting process. The goal is to increase the accuracy of short- and medium-term forecasts and thereby reduce the impact of extreme weather events on the power supply. In this new episode of the Canaltech Podcast, reporter Marcelo Fischer talks with Lucas Pinz, Director of Innovation and Technology at Eletrobras, who explains how the Weather Next model works and how the company is applying the technology in Brazil. The initiative is part of the Atmos project, Eletrobras's weather intelligence and monitoring center, which brings together meteorologists, data scientists, and engineers to anticipate critical situations such as heat waves, heavy rain, and extreme winds. According to Lucas, using AI lets the company prepare better for risk scenarios and guarantee higher-quality power delivery, reducing outages and strengthening the grid in the face of climate change. Also in this episode: Trump once again postpones the TikTok ban in the US amid sale negotiations; WhatsApp rolls out message reminders on the iPhone too; OpenAI wants to reduce ChatGPT's "hallucinations," but you may not like the solution; and Xiaomi launches a new tablet to challenge the iPad Pro. This podcast was scripted and hosted by Fernanda Santos, with reporting by Claudio Yuge, André Lourenti, João Melo, and Renato Moura, under the coordination of Anáísa Catucci. Soundtrack by Guilherme Zomer, editing by Jully Cruz, and cover art by Erick Teixeira. See omnystudio.com/listener for privacy information.

80,000 Hours Podcast with Rob Wiblin
Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Sep 15, 2025 106:49


At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It's mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen:
- Write publicly.
- Reach out to researchers whose work you admire.
- Say yes to unusual projects that seem a little scary.

Nanda's own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind's mechanistic interpretability team. He'd joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it's gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel's conversation!)

What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

Chapters:
Cold open (00:00:00)
Who's Neel Nanda? (00:01:12)
Luck surface area and making the right opportunities (00:01:46)
Writing cold emails that aren't insta-deleted (00:03:50)
How Neel uses LLMs to get much more done (00:09:08)
“If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:22)
Why Neel refuses to share his p(doom) (00:27:22)
How Neel went from the couch to an alignment rocketship (00:31:24)
Navigating towards impact at a frontier AI company (00:39:24)
How does impact differ inside and outside frontier companies? (00:49:56)
Is a special skill set needed to guide large companies? (00:56:06)
The benefit of risk frameworks: early preparation (01:00:05)
Should people work at the safest or most reckless company? (01:05:21)
Advice for getting hired by a frontier AI company (01:08:40)
What makes for a good ML researcher? (01:12:57)
Three stages of the research process (01:19:40)
How do supervisors actually add value? (01:31:53)
An AI PhD – with these timelines?! (01:34:11)
Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
Remember: You can just do things (01:43:51)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Les Cast Codeurs Podcast
LCC 330 - Nano Banana, Julia's AI

Les Cast Codeurs Podcast

Play Episode Listen Later Sep 15, 2025 108:38


Katia, Emmanuel, and Guillaume discuss Java, Kotlin, Quarkus, Hibernate, Spring Boot 4, and artificial intelligence (the Nano Banana and Veo 3 models, agentic frameworks, embeddings). They cover the OWASP vulnerabilities for LLMs, the coding personalities of the different models, Podman vs Docker, and how to modernize legacy projects. Above all, they spent time on Luc Julia's talks and the various counterpoints that made waves on social networks. Recorded September 12, 2025. Download the episode LesCastCodeurs-Episode-330.mp3 or watch the video on YouTube.

News

Languages

In this video, José details what changed in Java between Java 21 and 25: https://inside.java/2025/08/31/roadto25-java-language/
An overview of what's new in JDK 25:
- Introduction to the new Java language features and upcoming changes [00:02].
- Data-oriented programming and pattern matching [00:43]: evolution of pattern matching for deconstructing records [01:22]; use of sealed types in switch expressions to improve code readability and robustness [01:47]; introduction of unnamed patterns (_) to mark a variable as unused [04:47]; support for primitive types in instanceof and switch (in preview) [14:02]. A sketch follows this list.
- Designing Java applications [00:52]: simplified main method [21:31]; running .java files directly without an explicit compilation step [22:46]; improved import mechanisms [23:41]; Markdown syntax in Javadoc [27:46].
- Immutability and null values [01:08]: the problem of observing final fields as null while an object is under construction [28:44]; JEP 513 to control the call to super() and restrict the use of this in constructors [33:29].
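To make the pattern-matching items concrete, here is a minimal, self-contained Java sketch (the Shape types are invented for illustration) combining record deconstruction, sealed types in a switch, and the unnamed pattern. It needs JDK 22 or later, where unnamed patterns are final:

// PatternDemo.java — run with: java PatternDemo.java
// The sealed hierarchy lets the compiler check the switch for exhaustiveness,
// so no default branch is needed.
sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class PatternDemo {
    static String describe(Shape shape) {
        return switch (shape) {
            // Record pattern: deconstructs the component directly into r.
            case Circle(double r) -> "circle with radius " + r;
            // Unnamed pattern (_): the height is deliberately ignored.
            case Rectangle(double w, _) -> "rectangle " + w + " wide";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Circle(2.0)));
        System.out.println(describe(new Rectangle(3.0, 4.0)));
    }
}

If a permitted subtype is added to Shape later, the switch stops compiling until it is handled, which is the robustness benefit the episode mentions.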
JDK 25 ships on September 16: https://openjdk.org/projects/jdk/25/
- Scoped Values (JEP 506): a more efficient alternative to ThreadLocal for sharing immutable data between threads (see the sketch after this list)
- Structured Concurrency (JEP 505): treat groups of concurrent tasks as a single unit of work, simplifying thread management
- Compact Object Headers (JEP 519): final feature that halves the size of object headers (from 128 to 64 bits), saving up to 22% of heap memory
- Flexible Constructor Bodies (JEP 513): relaxed restrictions on constructors, allowing code before the super() or this() call
- Module Import Declarations (JEP 511): simplified imports that bring in all the public elements of a module with a single declaration
- Compact Source Files (JEP 512): simpler basic Java programs, with instance main methods and no mandatory wrapper class
- Primitive Types in Patterns (JEP 507): third preview extending pattern matching and instanceof to primitive types in switch and instanceof
- Generational Shenandoah (JEP 521): the Shenandoah garbage collector goes generational for better performance
- JFR Method Timing & Tracing (JEP 520): new profiling tooling to measure execution time and trace method calls
- Key Derivation Function API (JEP 510): final API for cryptographic key derivation functions, replacing third-party implementations
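As a taste of the first item, here is a minimal Scoped Values sketch. It is written against the JEP 506 API as finalized in JDK 25 (java.lang.ScopedValue), so double-check against the release Javadoc:

// ScopedValueDemo.java — requires JDK 25.
public class ScopedValueDemo {

    // Plays the role a ThreadLocal used to, but the binding is immutable
    // and only exists for the dynamic extent of run()/call().
    private static final ScopedValue<String> REQUEST_USER = ScopedValue.newInstance();

    static void handle() {
        // Reads the binding established by the nearest enclosing where().
        System.out.println("handling request for " + REQUEST_USER.get());
    }

    public static void main(String[] args) {
        // REQUEST_USER is bound to "alice" only while handle() runs;
        // outside that scope, get() would throw.
        ScopedValue.where(REQUEST_USER, "alice").run(ScopedValueDemo::handle);
    }
}

Unlike a ThreadLocal, there is no set() to forget to clean up: the value unbinds automatically when run() returns, which also makes it cheap to share with child threads under structured concurrency.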
Improved annotation handling in Kotlin 2.2: https://blog.jetbrains.com/idea/2025/09/improved-annotation-handling-in-kotlin-2-2-less-boilerplate-fewer-surprises/
- Before Kotlin 2.2, annotations on constructor parameters were applied only to the parameter, not to the property or the backing field
- This caused subtle bugs with Spring and JPA, where validation only ran when an object was created, not when it was updated
- The previous workaround was to use an explicit @field: target for every annotation, which made the code verbose
- Kotlin 2.2 introduces a new default behavior that applies annotations to the parameter AND to the property/field automatically
- The code gets cleaner, with no need for repetitive @field: syntax
- To enable it, add -Xannotation-default-target=param-property to the Gradle compiler options
- IntelliJ IDEA offers a quick-fix to enable this behavior project-wide
- This improvement makes Kotlin integrate more smoothly with major frameworks such as Spring and JPA
- The behavior can be configured to keep the old mode, or to enable a transitional mode with warnings
- This update is part of a broader initiative to improve the Kotlin + Spring experience

Libraries

Quarkus 3.26 released, with Hibernate updates and other features: https://quarkus.io/blog/quarkus-3-26-released/
- Update to 3.26.x, as there was a Vert.x regression
- An important milestone toward the 3.27 LTS release planned for late September, which will be based on this version
- Updates to Hibernate ORM 7.1, Hibernate Search 8.1, and Hibernate Reactive 3.1
- Support for named persistence units and data sources in Hibernate Reactive
- Offline startup and dialect configuration for Hibernate ORM, even when the database is unreachable
- Revamped HQL console in the Dev UI, with an integrated Hibernate Assistant feature
- Dev UI capabilities exposed as MCP functions so they can be driven by AI tools
- Automatic refresh of OIDC tokens when REST clients get a 401 response
- A JFR extension to capture runtime data (application name, version, active extensions)
- Gradle bumped to version 9.0 by default; support for the legacy config classes removed

Getting started with Quarkus and the A2A Java SDK 0.3.0 (to let AI agents talk to each other using the latest version of the A2A protocol): https://quarkus.io/blog/quarkus-a2a-java-0-3-0-alpha-release/
- Release of the A2A Java SDK 0.3.0.Alpha1, aligned with the A2A v0.3.0 specification
- A2A protocol: an open standard (Linux Foundation) enabling communication between polyglot AI agents
- Version 0.3.0 is more stable and introduces gRPC support
- General updates: significant changes, improved experience on both the client and server sides
- A2A server agents: gRPC support added (in addition to JSON-RPC), HTTP+JSON/REST coming; Quarkus-based implementations (Jakarta alternatives exist); specific dependencies for each transport (e.g. a2a-java-sdk-reference-jsonrpc, a2a-java-sdk-reference-grpc)
- AgentCard: describes the agent's capabilities; must specify the primary endpoint and every supported transport (additionalInterfaces)
- A2A clients: main dependency a2a-java-sdk-client; gRPC support added (in addition to JSON-RPC), HTTP+JSON/REST coming; specific dependency for gRPC: a2a-java-sdk-client-transport-grpc
- Client creation: via ClientBuilder, which automatically selects the transport based on the AgentCard and the client configuration, and lets you specify the transports the client supports (withTransport)

How to generate and edit images in Java with Nano Banana, Google's "Photoshop killer": https://glaforge.dev/posts/2025/09/09/calling-nano-banana-from-java/
- Goal: integrate the Nano Banana model (Gemini 2.5 Flash Image preview) into Java applications
- SDK used: Google's GenAI Java SDK
- Compatibility: supported by ADK for Java; not yet by LangChain4j (a limitation on multimodal output)
- Nano Banana's capabilities: create new images, edit existing images, combine several images
- The write-up covers which dependency to use, how to authenticate, and how to configure the model
- Nature of the model: Nano Banana is a chat model that can return text and an image (not simply an image generator)
- Usage examples: creation from a simple text prompt; editing by passing the existing image (as a byte array) plus editing instructions; assembly by passing several images (as bytes) plus integration instructions
- Key message: all of this is available in Java, with no Python required (a hedged sketch follows the Veo 3 notes below)

Generating AI videos with the Veo 3 model, in Java! https://glaforge.dev/posts/2025/09/10/generating-videos-in-java-with-veo3/
- Video generation in Java with Veo 3 (via Google's GenAI Java SDK)
- Veo 3: announced as GA, with lower prices, 9:16 format support, and resolution up to 1080p
- Videos can be created from a text prompt or from an existing image
- Two model variants: veo-3.0-generate-001 (higher quality, more expensive, slower) and veo-3.0-fast-generate-001 (lower quality, cheaper, but faster)
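Following the blog post's approach, here is a rough sketch of calling the image model from the google-genai Java SDK (com.google.genai:google-genai). The model name and call shape follow the SDK's documented text API; the exact classes and the image-extraction details may differ by SDK version, so treat the specifics as assumptions:

// NanoBananaSketch.java — hedged sketch, not a verified end-to-end sample.
import com.google.genai.Client;
import com.google.genai.types.GenerateContentResponse;

public class NanoBananaSketch {
    public static void main(String[] args) {
        // Assumption: the no-arg Client reads the GOOGLE_API_KEY environment variable.
        Client client = new Client();

        GenerateContentResponse response = client.models.generateContent(
                "gemini-2.5-flash-image-preview",                       // "Nano Banana"
                "A photorealistic banana wearing sunglasses on a beach", // text prompt
                null);                                                   // default config

        // Nano Banana is a chat model: the response can mix text parts and
        // inline image parts. The image bytes live in the candidates' content
        // parts (extraction omitted here; see the blog post for the full code).
        System.out.println(response.text());
    }
}

Editing or combining images works the same way, except the request contents also carry the input image bytes alongside the instructions.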
Rod Johnson on writing agentic applications in Java more easily than in Python, with Embabel: https://medium.com/@springrod/you-can-build-better-ai-agents-in-java-than-python-868eaf008493
- Rod, the father of Spring, rewrites a CrewAI (Python) example that generates a book, using Embabel (Java), to demonstrate Java's advantages
- The application uses several specialized AI agents: a researcher, a book planner, and chapter writers
- The process has three steps: research the topic, create the outline, write the chapters in parallel, then assemble them
- CrewAI suffers from several problems: heavy configuration, lack of type safety, magic keys in the prompts
- The Embabel version needs less Java code than the original Python, and fewer YAML configuration files
- Embabel brings full type safety, eliminating typos in prompts and improving IDE tooling
- Concurrency is controlled better in Java, to stay within the rate limits of LLM APIs
- Spring integration allows simple external configuration of LLM models and hyperparameters
- The Embabel planner automatically determines the execution order of actions based on the types they require (a loose sketch follows below)
- The main argument: the JVM ecosystem offers a better programming model, and access to existing business logic, than Python
- There are quite a few new agentic frameworks in Java, notably the latest LangChain4j Agentic
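A loose sketch of the Embabel style described here: typed records flow between annotated actions, and the planner chains actions by matching their input and output types. The annotation names (@Agent, @Action, @AchievesGoal) and the OperationContext API are recalled from Embabel's published examples and should be treated as assumptions, not a verified API surface:

// BookAgent.java — hedged, Embabel-style sketch (API names approximate).
import com.embabel.agent.api.annotation.Action;
import com.embabel.agent.api.annotation.AchievesGoal;
import com.embabel.agent.api.annotation.Agent;
import com.embabel.agent.api.common.OperationContext;

// Typed artifacts instead of CrewAI's magic string keys.
record Topic(String subject) {}
record Research(String findings) {}
record BookOutline(String chapters) {}

@Agent(description = "Researches a topic and plans a book about it")
public class BookAgent {

    @Action
    Research research(Topic topic, OperationContext context) {
        // The prompt is plain text, but the output is bound to a record,
        // so a typo here is a compile error, not a silent runtime miss.
        return context.ai().withDefaultLlm()
                .createObject("Research the topic: " + topic.subject(), Research.class);
    }

    @AchievesGoal(description = "A complete book outline")
    @Action
    BookOutline outline(Research research, OperationContext context) {
        return context.ai().withDefaultLlm()
                .createObject("Plan chapters from: " + research.findings(), BookOutline.class);
    }
}

The point of the sketch is the planning model: because outline() requires a Research and research() produces one, the planner runs them in that order without any YAML wiring.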
Spring launches a series of blog posts on what's new in Spring Boot 4: https://spring.io/blog/2025/09/02/road_to_ga_introduction
- Baseline JDK 17, but rebased on Jakarta 11, Kotlin 2, Jackson 3, and JUnit 6
- Spring's core resilience features: @ConcurrencyLimit, @Retryable, RetryTemplate
- API versioning in Spring
- HTTP service client improvements
- The state of HTTP clients in Spring
- Introducing Jackson 3 support in Spring
- Shared consumers: Kafka queues in Spring Kafka
- Modularizing Spring Boot
- Progressive authorization in Spring Security
- Spring gRPC, a new Spring Boot module
- Null-safe applications with Spring Boot 4
- OpenTelemetry with Spring Boot
- Ahead-of-Time repositories (part 2)

Web

Running semantic search locally, right in the browser, with EmbeddingGemma and Transformers.js: https://glaforge.dev/posts/2025/09/08/in-browser-semantic-search-with-embeddinggemma/
- EmbeddingGemma: a new embedding model (308M parameters) from Google DeepMind
- Goal: enable semantic search directly in the browser
- Key advantages of client-side AI: privacy (no data sent to a server), lower costs (no expensive GPU servers, static hosting), low latency (instant processing, no network round trips), and offline operation (after the initial model download)
- Core technology: the EmbeddingGemma model (small, capable, multilingual, with MRL support to shrink vector sizes) and HuggingFace's Transformers.js inference engine (runs AI models in JavaScript in the browser)
- Deployment: a static site built with Vite/React/Tailwind CSS, deployed to Firebase Hosting through GitHub Actions
- Model management: the model files are too heavy for Git, so they are downloaded from the HuggingFace Hub during CI/CD
- How the app works: it loads the model, generates embeddings for queries and documents, and computes semantic similarity (the scoring step is sketched below)
- Conclusion: a demonstration of private, inexpensive, serverless semantic search, highlighting the potential of in-browser AI
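The in-browser demo uses Transformers.js, but the ranking step is plain vector math. In Java (the language used for the other sketches in these notes), the cosine-similarity scoring looks like this:

// CosineSimilarity.java — the scoring step of a semantic search:
// embed the query, embed each document, rank documents by this score.
public class CosineSimilarity {
    static double cosine(float[] a, float[] b) {
        if (a.length != b.length) throw new IllegalArgumentException("dimension mismatch");
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot   += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings"; real EmbeddingGemma vectors have
        // hundreds of dimensions, truncatable thanks to MRL training.
        float[] query = {0.2f, 0.7f, 0.1f};
        float[] doc   = {0.1f, 0.8f, 0.0f};
        System.out.printf("similarity = %.3f%n", cosine(query, doc));
    }
}

A score near 1 means the query and document point in the same semantic direction; ranking documents by this score is the whole "search" once embeddings exist.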
Data and Artificial Intelligence

Docker launches Cagent, a multi-agent AI framework of sorts that uses external LLMs, Docker Model Runner models, and the Docker MCP Toolkit. It offers a YAML format to describe the agents of a multi-agent system: https://github.com/docker/cagent
- Prompt-driven agents (no code) and a structure describing how they are deployed
- Not yet clear how they are invoked, other than from the cagent command line
- Built by David Gageot

OWASP describes excessive LLM agency as a vulnerability: https://genai.owasp.org/llmrisk2023-24/llm08-excessive-agency/
- Excessive agency is the vulnerability that lets LLM systems perform damaging actions through unexpected or ambiguous outputs
- It stems from three main causes: excessive functionality, excessive permissions, or excessive autonomy of LLM agents
- Excessive functionality includes access to plugins that offer more capabilities than needed, such as a read plugin that can also modify or delete
- Excessive permissions show up when a plugin accesses systems with overly elevated rights, for example read access that also includes write
- Excessive autonomy arises when the system performs critical actions without prior human validation
- A typical attack scenario: a personal assistant with email access is manipulated through prompt injection into sending spam from the user's mailbox
- Prevention means strictly limiting plugins to the minimal functions needed for the intended operation
- Avoid open-ended functions like "run a shell command" in favor of more granular, specific tools (sketched below)
- Applying the principle of least privilege is crucial: each plugin should have only the minimal permissions it requires
- Human-in-the-loop control remains essential to validate high-impact actions before they run
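To illustrate the "granular tools over open-ended functions" advice, here is a small Java design sketch contrasting the two; the interfaces are invented for illustration, not taken from any agent framework:

// Invented-for-illustration sketch of the OWASP guidance: expose narrow,
// least-privilege capabilities to an LLM agent instead of one open-ended tool.
import java.util.List;

// Risky: a single open-ended function gives the model arbitrary reach.
interface ShellTool {
    String exec(String command); // anything the process user can do
}

// Safer: minimal, purpose-built operations with read-only semantics.
interface MailboxReadTool {
    List<String> listSubjects(int max);   // can enumerate, but not send
    String readMessage(String messageId); // can read, but not delete
}

// Human-in-the-loop gate for the one high-impact action that remains.
interface MailboxSendTool {
    void requestSend(String draftId); // queues the draft for explicit user approval
}

Even under prompt injection, an agent wired to the narrow interfaces can leak at most what those methods expose, and it cannot send anything without the approval step.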
Launch of the MCP Registry, a sort of official meta-directory for referencing MCP servers: https://www.marktechpost.com/2025/09/09/mcp-team-launches-the-preview-version-of-the-mcp-registry-a-federated-discovery-layer-for-enterprise-ai/
- MCP Registry: a federated discovery layer for enterprise AI
- Works like DNS for AI context, enabling discovery of public or private MCP servers
- Federated model: avoids the security and compliance risks of a monolithic registry; allows private sub-registries while keeping an upstream source of truth
- Enterprise benefits: secure internal discovery, centralized governance of external servers, less context sprawl, support for hybrid AI agents (private/public data)
- Open source project, currently in preview
- Official blog post: https://blog.modelcontextprotocol.io/posts/2025-09-08-mcp-registry-preview/

Exploring the internals of the SQL Server transaction log: https://debezium.io/blog/2025/09/08/sqlserver-tx-log/
- An article for the hardcore, for those who want to know how SQL Server works on the inside
- Debezium currently polls SQL Server's CDC change tables periodically
- The article explores parsing the transaction log directly to improve performance
- The transaction log is divided into Virtual Log Files (VLFs) used in a circular fashion
- Each VLF contains blocks (512 B to 60 KB) that hold the transaction records
- Each record has a unique Log Sequence Number (LSN) to identify it precisely
- Data is stored in 8 KB pages with a 96-byte header and an offset array
- Tables are organized into partitions and allocation units to manage disk space
- The DBCC utility makes it possible to explore the internal structure of pages and their contents
- This understanding lays the groundwork for programmatically parsing the transaction log in a follow-up article

Tooling

The coding personalities of the different LLMs: https://www.sonarsource.com/blog/the-coding-personalities-of-leading-llms-gpt-5-update/
- GPT-5 minimal does not dethrone Claude Sonnet 4 as the functional performance leader, despite a 75% success rate
- GPT-5 generates extremely verbose code: 490,000 lines versus 370,000 for Claude Sonnet 4 on the same tasks
- The cyclomatic and cognitive complexity of GPT-5's code is dramatically higher than every other model's
- GPT-5 introduces 3.90 issues per successful task, versus only 2.11 for Claude Sonnet 4
- GPT-5's strength: exceptional security, with only 0.12 vulnerabilities per 1,000 lines of code
- Major weakness: a very high density of code smells (25.28 per 1,000 lines), hurting maintainability
- GPT-5 produces 12% of its issues in the cognitive-complexity category, the highest rate of any model
- A tendency toward fundamental logic errors, with 24% of bugs being control-flow mistakes
- Classic vulnerabilities such as injection and path traversal flaws reappear
- Stronger governance is needed, with mandatory static analysis to manage the complexity of generated code

Why I ditched Docker for Podman: https://codesmash.dev/why-i-ditched-docker-for-podman-and-you-should-too
- The Docker problem: the persistent dockerd daemon runs with root privileges, creating security risks (numerous CVEs cited) and consuming resources unnecessarily
- The Podman solution: daemonless (no persistent background process; containers run as child processes of the Podman command, with the user's privileges), stronger security (smaller attack surface; a container escape compromises an unprivileged user, not the whole system; rootless mode), better reliability (no single point of failure; one container crashing does not affect the others), fewer resources (no always-on daemon, so less memory and CPU)
- Key Podman features: systemd integration (automatic generation of systemd unit files to manage containers as standard Linux services), Kubernetes alignment (native pod support and the ability to generate Kubernetes YAML directly with podman generate kube, easing local development for K8s), Unix philosophy (focuses on running containers and delegates specialized tasks to dedicated tools, e.g. Buildah for building images, Skopeo for managing them)
- Easy migration: a Docker-compatible CLI (podman takes the same commands as docker; alias docker=podman works), and existing Dockerfiles can be used as-is
- Included improvements: secure defaults (privileged ports in rootless mode), better handling of volume permissions, an optional Docker-compatible API, and the option to convert Docker Compose files to Kubernetes YAML
- Production benefits: improved security and cleaner resource usage; Podman is a more secure evolution, better aligned with modern Linux management and container deployment practices
- Practical guide (FastAPI example): the Dockerfile does not change; podman build and podman run directly replace the Docker commands; production deployment via systemd; multi-service applications managed with Podman "pods"; Docker Compose compatibility via podman-compose or kompose

Improved vulnerable API detection in JetBrains IDEs and Qodana: https://blog.jetbrains.com/idea/2025/09/enhanced-vulnerable-api-detection-in-jetbrains-ides-and-qodana/
- JetBrains partners with Mend.io to strengthen code security in their tools
- The Package Checker plugin gains new, enriched data on vulnerable APIs
- Call-graph analysis now covers more public methods of open source libraries
- Java, Kotlin, C#, JavaScript, TypeScript, and Python are supported for vulnerability detection
- Inspections are enabled via Settings > Editor > Inspections by searching for "Vulnerable API"
- Vulnerable methods are highlighted automatically, with details of the flaws on hover
- A context action jumps straight to the problematic dependency declaration
- Alt+Enter on the dependency updates it automatically to an unaffected version
- A dedicated "Vulnerable Dependencies" window shows the project's overall vulnerability status
Methodologies

The results of the Stack Overflow survey on AI use in coding: https://medium.com/@amareshadak/stack-overflow-just-exposed-the-ugly-truth-about-ai-coding-tools-b4f7b5992191
- 84% of developers use AI daily, but 46% do not trust the results; only 3.1% have "high trust" in generated code
- 66% are frustrated by AI solutions that are "almost right"; 45% say debugging AI code takes longer than writing it themselves
- Senior developers (10+ years) trust AI less (2.6%) than beginners (6.1%), creating a dangerous knowledge gap
- Western countries show less trust (Germany 22%, UK 23%, USA 28%) than India (56%); the people building AI tools trust them less
- 77% of professional developers reject natural-language programming; only 12% actually use it
- When AI fails, 75% turn to humans; 35% of Stack Overflow visits now concern AI-related problems
- 69% report personal productivity gains, but only 17% see improved team collaboration
- Hidden costs: verification time, explaining AI code to teammates, refactoring, and constant cognitive load
- Human platforms still dominate for solving AI problems: Stack Overflow (84%), GitHub (67%), YouTube (61%)
- The future points to "augmented development," where AI becomes one tool among others, requiring transparency and uncertainty management

Open source mentorship and community challenges, from the Microcks folks: https://microcks.io/blog/beyond-code-open-source-mentorship/
- Microcks suffers from the "silent users" syndrome: people who benefit from the project without contributing
- Despite thousands of downloads and growing adoption, community engagement remains low
- This lack of interaction creates sustainability challenges and limits the project's innovation
- Maintainers end up developing in a vacuum, without feedback from real users
- Contributing does not require writing code: documentation, sharing your experience, and reporting bugs are enough
- Telling the people around you about a project you love also helps a lot
- Microcks also asks some specific questions in the post, so if you use it, go take a look
- Open source success depends on turning users into true community partners
- This is a fairly common pattern: the ratio of vocal to silent users is tiny, which amplifies a few loud voices

Modernizing legacy systems is not just about tech: https://blog.scottlogic.com/2025/08/27/holistic-approach-successful-legacy-modernisation.html
- An article that steps back from the technology of legacy modernization projects
- Legacy modernization projects need a holistic vision, beyond a pure technology focus
- The business drivers differ from greenfield projects: cost reduction and risk mitigation rather than revenue generation
- The current state is harder to map, with many dependencies and breakage risks
- Collaboration between architects, business analysts, and UX designers is essential from the discovery phase onward
- A three-dimensional approach is mandatory: people, process, and technology (like a game of 3D chess)
- Leadership must create the space needed for discovery and planning rather than rushing the team
- Communicate in business terms, not technical ones, at every level of the organization
- Upfront planning is essential, contrary to received wisdom about agility
- The optimal sequencing is often non-obvious and requires deep analysis of the interdependencies
- Project phases aligned with business outcomes allow agility within each phase

Security

Cyberattack on the National Museum of Natural History: https://www.franceinfo.fr/internet/securite-sur-internet/cyberattaques/le-museum-nati[…]e-d-une-cyberattaque-severe-une-plainte-deposee_7430356.html
Massive compromise of popular npm packages by crypto malware: https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised
- 18 very popular npm packages compromised on September 8, 2025, including chalk, debug, and ansi-styles, with more than 2 billion combined weekly downloads; duckdb was later added to the list
- Malicious code was injected that silently intercepts crypto and web3 activity in users' browsers
- The malware manipulates wallet interactions and redirects payments to attacker-controlled accounts without obvious signs
- It injects itself into critical functions such as fetch, XMLHttpRequest, and wallet APIs (window.ethereum, Solana) to intercept traffic
- It automatically detects and replaces crypto addresses across multiple blockchains (Ethereum, Bitcoin, Solana, Tron, Litecoin, Bitcoin Cash)
- Transactions are modified in the background even when the user interface looks correct and legitimate
- It uses "lookalike" addresses found via string matching to make the swaps harder to spot
- The maintainer was compromised by a phishing email from the fake domain npmjs.help, registered three days before the attack, asking him to update his two-factor authentication after a year
- Aikido alerted the maintainer via Bluesky; he confirmed the compromise and began cleaning up the packages
- A sophisticated attack operating at several levels: web content, API calls, and manipulation of transaction signatures

Video game anti-cheats: a major security flaw? https://tferdinand.net/jeux-video-et-si-votre-anti-cheat-etait-la-plus-grosse-faille/
- Modern anti-cheats install themselves at Ring 0 (the system kernel) with maximum privileges
- They get the same level of access as professional antivirus products, but without audits or certification
- Some exploit Secure Boot to load before the operating system
- Supply chain risk: the APT41 group has already compromised games such as League of Legends
- An attacker with a foothold could disable security solutions and stay invisible
- Stability threat: one mistake can prevent the system from booting (see CrowdStrike)
- Possible conflicts between different anti-cheats that block each other
- Real-time monitoring of usage data under the guise of anti-cheating
- A dangerous drift, according to the author: game companies are gaining EDR-level access
- Limited alternatives: cloud gaming, or sandboxing with a performance hit
- So watch out for the games your kids install!

Law, society, and organization

Luc Julia at the French Senate: Monsieur Phi reacts and publishes the video "Luc Julia au Sénat : autopsie d'un grand N'IMPORTE QUOI" ("Luc Julia at the Senate: autopsy of utter NONSENSE"): https://www.youtube.com/watch?v=e5kDHL-nnh4
- A 20-minute podcast version, released at the same time, about his Devoxx talk: https://www.youtube.com/watch?v=Q0gvaIZz1dM
- Le lab IA, Jérôme Fortias: "Et si Luc Julia avait raison" ("What if Luc Julia were right"): https://www.youtube.com/watch?v=KScI5PkCIaE
- Luc Julia at the Senate: https://www.youtube.com/watch?v=UjBZaKcTeIY
- Luc Julia defends himself: https://www.youtube.com/watch?v=DZmxa7jJ8sI
- Artificial intelligence: imminent catastrophe?
  Luc Julia vs Maxime Fournes, on Tech and Co: https://www.youtube.com/watch?v=sCNqGt7yIjo
- Monsieur Phi vs Luc Julia (clickbait title): https://www.youtube.com/watch?v=xKeFsOceT44
- La tronche en biais: https://www.youtube.com/live/zFwLAOgY0Wc

Conferences

The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
September 15, 2025: Agile Tour Montpellier - Montpellier (France)
September 18-19, 2025: API Platform Conference - Lille (France) & Online
September 22-24, 2025: Kernel Recipes - Paris (France)
September 22-27, 2025: La Mélée Numérique - Toulouse (France)
September 23, 2025: OWASP AppSec France 2025 - Paris (France)
September 23-24, 2025: AI Engineer Paris - Paris (France)
September 25, 2025: Agile Game Toulouse - Toulouse (France)
September 25-26, 2025: Paris Web 2025 - Paris (France)
September 30-October 1, 2025: PyData Paris 2025 - Paris (France)
October 2, 2025: Nantes Craft - Nantes (France)
October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
October 6-7, 2025: Swift Connection 2025 - Paris (France)
October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 7, 2025: BSides Mulhouse - Mulhouse (France)
October 7-8, 2025: Agile en Seine - Issy-les-Moulineaux (France)
October 8-10, 2025: SIG 2025 - Paris (France) & Online
October 9, 2025: DevCon #25: quantum computing - Paris (France)
October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 9-10, 2025: EuroRust 2025 - Paris (France)
October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
October 16, 2025: Power 365 - 2025 - Lille (France)
October 16-17, 2025: DevFest Nantes - Nantes (France)
October 17, 2025: Sylius Con 2025 - Lyon (France)
October 17, 2025: ScalaIO 2025 - Paris (France)
October 17-19, 2025: OpenInfra Summit Europe - Paris (France)
October 20, 2025: Codeurs en Seine - Rouen (France)
October 23, 2025: Cloud Nord - Lille (France)
October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
October 30-November 2, 2025: PyConFR 2025 - Lyon (France)
November 4-7, 2025: NewCrafts 2025 - Paris (France)
November 5-6, 2025: Tech Show Paris - Paris (France)
November 5-6, 2025: Red Hat Summit: Connect Paris 2025 - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
November 13, 2025: DevFest Toulouse - Toulouse (France)
November 15-16, 2025: Capitole du Libre - Toulouse (France)
November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
November 19-21, 2025: Agile Grenoble - Grenoble (France)
November 20, 2025: OVHcloud Summit - Paris (France)
November 21, 2025: DevFest Paris 2025 - Paris (France)
November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
November 28, 2025: DevFest Lyon - Lyon (France)
December 1-2, 2025: Tech Rocks Summit 2025 - Paris (France)
December 4-5, 2025: Agile Tour Rennes - Rennes (France)
December 5, 2025: DevFest Dijon 2025 - Dijon (France)
December 9-11, 2025: APIdays Paris - Paris (France)
December 9-11, 2025: Green IO Paris - Paris (France)
December 10-11, 2025: Devops REX - Paris (France)
December 10-11, 2025: Open Source Experience - Paris (France)
December 11, 2025: Normandie.ai 2025 - Rouen (France)
January 14-17, 2026: SnowCamp
2026 - Grenoble (France)
February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 12-13, 2026: Touraine Tech #26 - Tours (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)

Contact us

To react to this episode, come discuss on the Google group: https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter: https://twitter.com/lescastcodeurs or Bluesky: https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/

Masty o Rasty | پادکست فارسی مستی و راستی

Ali is VP of AI Product at SandboxAQ and was Product Lead at Google DeepMind. Previously at Google Research. Before that: Meta (including Facebook AI - FAIR, Integrity, and News Feed), LinkedIn, Yahoo, Microsoft, and a startup. PhD in computer science. In this episode we talk about the current climate in Silicon Valley. To reach Ali, go to: https://www.instagram.com/alikh1980 Hosted on Acast. See acast.com/privacy for more information.

All-In with Chamath, Jason, Sacks & Friedberg
Google DeepMind CEO Demis Hassabis on AI, Creativity, and a Golden Age of Science | All-In Summit

All-In with Chamath, Jason, Sacks & Friedberg

Play Episode Listen Later Sep 12, 2025 31:48


(0:00) Introducing Sir Demis Hassabis, reflecting on his Nobel Prize win
(2:39) What is Google DeepMind? How does it interact with Google and Alphabet?
(4:01) Genie 3 world model
(9:21) State of robotics models, form factors, and more
(14:42) AI science breakthroughs, measuring AGI
(20:49) Nano-Banana and the future of creative tools, democratization of creativity
(24:44) Isomorphic Labs, probabilistic vs deterministic, scaling compute, a golden age of science

Thanks to our partners for making this happen!
Solana: https://solana.com/
OKX: https://www.okx.com/
Google Cloud: https://cloud.google.com/
IREN: https://iren.com/
Oracle: https://www.oracle.com/
Circle: https://www.circle.com/
BVNK: https://www.bvnk.com/

Follow Demis: https://x.com/demishassabis

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

The Made to Thrive Show
AI Health Breakthroughs: Personal AI Agents for Disease Prevention, Peak Performance, Lifelong Wellness & Curing Cancer by 2030 with Leading AI Futurist Steve Brown

The Made to Thrive Show

Play Episode Listen Later Sep 10, 2025 59:04


AI is here, and today is the worst it is ever going to be. But there are also over 8 billion human beings, so the biggest question of all is: how are we going to cooperate and even co-exist? Is AI here to take all the jobs or create economic and scientific super-abundance? And for human health, will we all carry a personal AI health agent in our pockets helping us live healthier and happier and prevent disease, and will AI cure every single cancer by 2030? At this stage, there are more questions than answers, which is why I invited the number one artificial intelligence futurist onto the show.

Steve Brown is a leading voice in the field of artificial intelligence. A former executive at Google DeepMind and Intel, he has delivered hundreds of information-packed and entertaining keynotes across five continents, inspiring audiences to take action with AI. A thought leader on AI, generative AI, autonomous agents, digital transformation, and the future impact of AI on business, education, and society, Steve has held senior leadership roles over his 25-year career, including Senior Director and in-house Futurist at Google DeepMind in London and Intel's Chief Evangelist and Futurist. He is the co-founder of The Provenance Chain Network, a company providing supply chain transparency and security services for the U.S. Space Force, as well as a strategic advisor to two AI startups and a BCG Luminary. Steve's mission is to help organisations build a better future with AI by creating new customer experiences, streamlining operations, and elevating the workforce.

Join us as we explore:
- What AI is, the different types, why you would use one type over another, what AI can do today, and why today is the worst AI will ever be.
- How to use AI right now to improve decision making, and specifically your personal health AI agent who is there with you 24/7 to make medicine, health, and performance optimization choices.
- How to think of and deploy AI as an enhancer and augmenter of our lives and professions rather than fearing it is here to replace you.
- Prompt engineering your health and performance.
- AI hallucinations, AI risks, AI misuse, AI misalignment, and AI blackmail.

Contact:
Website - https://www.stevebrown.ai

Mentions:
Tools - NotebookLM, https://notebooklm.google
Tools - Perplexity, https://www.perplexity.ai

Support the show

Follow Steve's socials: Instagram | LinkedIn | YouTube | Facebook | Twitter | TikTok

Support the show on Patreon: As much as we love doing it, there are costs involved and any contribution will allow us to keep going and keep finding the best guests in the world to share their health expertise with you. I'd be grateful and feel so blessed by your support: https://www.patreon.com/MadeToThriveShow

Send me a WhatsApp to +27 64 871 0308.

Disclaimer: Please see the link for our disclaimer policy for all of our content: https://madetothrive.co.za/terms-and-conditions-and-privacy-policy/

People of AI
Creative storytelling with AI: The making of Ancestra

People of AI

Play Episode Listen Later Sep 10, 2025 61:40


In this episode of People of AI, we take you behind the scenes of "ANCESTRA," a groundbreaking film that integrates generative artificial intelligence into its core. Hear from the director Eliza McNitt and key collaborators from the Google DeepMind team about how they leveraged AI as a new creative tool, navigated its capabilities and limitations, and ultimately shaped a unique cinematic experience. Understand the future role of AI in filmmaking and its potential for developers and storytellers.

Chapters:
0:00 - Introduction to Ancestra: AI in filmmaking
3:38 - The origin story of ANCESTRA
5:35 - Google DeepMind and Primordial Soup collaboration
11:47 - Veo and the creative process
20:21 - Behind the scenes: making the film
28:47 - Generating videos: Gemini and Veo tools
38:11 - AI as a creative tool, not a replacement
47:41 - AI's impact and the future of the film industry
53:51 - Generative models: a new kind of camera
57:46 - Rapid fire & conclusion

Resources:
Ancestra → https://goo.gle/4mVScNW
Making of ANCESTRA → https://goo.gle/3JVJil1
Veo 3 → https://goo.gle/4mWn3Kz
Veo 3 Documentation → https://goo.gle/46qqFOV
Veo 3 Cookbook → https://goo.gle/3VMVFSZ
Google Flow → https://goo.gle/3VMVR4F
Watch more People of AI → https://goo.gle/PAI
Subscribe to Google for Developers → https://goo.gle/developers

#PeopleofAI
Speakers: Christina Warren, Ashley Oldacre, Eliza McNitt, Ben Wiley, Corey Matthewson
Products Mentioned: Google AI, Gemini, Veo 2, Veo 3

80,000 Hours Podcast with Rob Wiblin
#222 – Neel Nanda on the race to read AI minds

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Sep 8, 2025 181:11


We don't know how AIs think or why they do what they do. Or at least, we don't know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultural influence, directly advise on major government decisions, and even operate military equipment autonomously. We simply can't tell what models, if any, should be trusted with such authority.

Neel Nanda of Google DeepMind is one of the founding figures of the field of machine learning trying to fix this situation — mechanistic interpretability (or “mech interp”). The project has generated enormous hype, exploding from a handful of researchers five years ago to hundreds today — all working to make sense of the jumble of tens of thousands of numbers that frontier AIs use to process information and decide what to say or do.

Full transcript, video, and links to learn more: https://80k.info/nn1

Neel now has a warning for us: the most ambitious vision of mech interp he once dreamed of is probably dead. He doesn't see a path to deeply and reliably understanding what AIs are thinking. The technical and practical barriers are simply too great to get us there in time, before competitive pressures push us to deploy human-level or superhuman AIs. Indeed, Neel argues no one approach will guarantee alignment, and our only choice is the “Swiss cheese” model of accident prevention, layering multiple safeguards on top of one another.

But while mech interp won't be a silver bullet for AI safety, it has nevertheless had some major successes and will be one of the best tools in our arsenal.

For instance: by inspecting the neural activations in the middle of an AI's thoughts, we can pick up many of the concepts the model is thinking about — from the Golden Gate Bridge, to refusing to answer a question, to the option of deceiving the user. While we can't know all the thoughts a model is having all the time, picking up 90% of the concepts it is using 90% of the time should help us muddle through, so long as mech interp is paired with other techniques to fill in the gaps.

This episode was recorded on July 17 and 21, 2025.

Interested in mech interp? Apply by September 12 to be a MATS scholar with Neel as your mentor! http://tinyurl.com/neel-mats-app

What did you think? https://forms.gle/xKyUrGyYpYenp8N4A

Chapters:
Cold open (00:00)
Who's Neel Nanda? (01:02)
How would mechanistic interpretability help with AGI (01:59)
What's mech interp? (05:09)
How Neel changed his take on mech interp (09:47)
Top successes in interpretability (15:53)
Probes can cheaply detect harmful intentions in AIs (20:06)
In some ways we understand AIs better than human minds (26:49)
Mech interp won't solve all our AI alignment problems (29:21)
Why mech interp is the 'biology' of neural networks (38:07)
Interpretability can't reliably find deceptive AI – nothing can (40:28)
'Black box' interpretability — reading the chain of thought (49:39)
'Self-preservation' isn't always what it seems (53:06)
For how long can we trust the chain of thought (01:02:09)
We could accidentally destroy chain of thought's usefulness (01:11:39)
Models can tell when they're being tested and act differently (01:16:56)
Top complaints about mech interp (01:23:50)
Why everyone's excited about sparse autoencoders (SAEs) (01:37:52)
Limitations of SAEs (01:47:16)
SAEs performance on real-world tasks (01:54:49)
Best arguments in favour of mech interp (02:08:10)
Lessons from the hype around mech interp (02:12:03)
Where mech interp will shine in coming years (02:17:50)
Why focus on understanding over control (02:21:02)
If AI models are conscious, will mech interp help us figure it out (02:24:09)
Neel's new research philosophy (02:26:19)
Who should join the mech interp field (02:38:31)
Advice for getting started in mech interp (02:46:55)
Keeping up to date with mech interp results (02:54:41)
Who's hiring and where to work? (02:57:43)

Host: Rob Wiblin
Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Night Science
Can Google's Co-scientist project give scientists superpowers?

Night Science

Play Episode Listen Later Sep 8, 2025 40:13


To answer this question, we speak with Dr. Alan Karthikesalingam and Vivek Natarajan from Google DeepMind about their groundbreaking AI co-scientist project. Beyond their work at Google, Alan is an honorary lecturer in vascular surgery at Imperial College London, and Vivek teaches at Harvard's T.H. Chan School of Public Health. Together, we discuss how their system has evolved to mirror parts of human hypothesis generation while also diverging in fascinating ways. We talk about its internal “tournaments” of ideas, its ability to be prompted to “think out of the box,” and whether it becomes too constrained by the need to align with every published “fact”. And we discuss how we still seem far away from a time when AI can not only answer our questions, but can ask new and exciting research questions itself. The Night Science Podcast is produced by the Night Science Institute. For more information on Night Science, visit night-science.org.

Cloud Wars Live with Bob Evans
Microsoft's Mustafa Suleyman Warns Against the Illusion of Conscious AI: “Build AI for People, Not to Be a Person”

Cloud Wars Live with Bob Evans

Play Episode Listen Later Sep 5, 2025 2:30


Highlights

00:03 — Microsoft AI CEO Mustafa Suleyman has published a blog post that criticizes the notion of seemingly conscious AI. He argues that the pursuit of this idea, particularly regarding AI model welfare, is misguided.

00:19 — Suleyman explains that these concepts are already causing mental health issues among users. He is growing “more and more concerned about what is becoming known as the psychosis risk and a bunch of related issues. I don't think this will be limited to those who are already at risk of mental health issues.”

00:56 — Suleyman argues that we should “build AI for people, not to be a person.” He is adamant in his rejection of the earlier approach, saying: “The arrival of seemingly conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.”

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.

01:18 — Meanwhile, companies like Anthropic, OpenAI, and Google DeepMind are actively pursuing AI consciousness or welfare research. While there are complications and not everyone is on the same page here, it is clear that the evolving scenarios and possibilities presented by AI are increasingly reaching into the realm of the surreal, and in some cases they're fantastical.

01:42 — This is largely because, for the first time in human history, we have a tool that is evolving at a pace beyond what we can conceive. While some of these ideas may sound far-fetched, the warnings accompanying them are serious, based on real concerns coming from leaders in the AI space, the ones leading this new AI revolution.

02:04 — What everyone must do, though it will be challenging, is start imagining the unimaginable. Once you accept the vast possibilities of AI, both good and bad, you can begin to take the warnings that come with them seriously. Visit Cloud Wars for more.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 598: Nano Banana! Real Use Cases for Google's new Gemini 2.5 Flash Image

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Aug 27, 2025 47:55


Nano Banana is no longer a mystery. Google officially released Gemini 2.5 Flash Image (AKA Nano Banana) on Tuesday, revealing it was the company behind the buzzy AI image model that had the internet talking. But... what does it actually do? And how can you put it to work for you? Find out in our newish weekly segment, AI at Work on Wednesdays.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Gemini 2.5 Flash Image (Nano Banana) Reveal
Benchmark Scores: Gemini 2.5 Flash Image vs. Competition
Multimodal Model Capabilities Explained
Character Consistency in AI Image Generation
Advanced Image Editing: Removal and Object Control
Integration with Google AI Studio and API
Real-World Business Use Cases for Gemini 2.5
Live Demos: Headshots, Mockups, and Infographics
Gemini 2.5 Flash Image Pricing and Limits
Iterative Prompting for AI Image Creation

Timestamps:
00:00 "AI Highlights: Google's Gemini 2.5"
06:17 "Nano Banana AI Features"
09:58 "Revolutionizing Photo Editing Tools"
12:31 "Nano Banana: Effortless Video Updating"
14:39 "Impressions on Nano Banana"
19:24 AI Growth Strategies Unlocked
20:58 Turning Selfie into Professional Headshot
24:48 AI-Enhanced Headshots and Team Photos
29:51 "3D AI Logo Mockups"
32:22 Improved Logo Design Review
35:41 Photoshop Shortcut Critique
38:50 Deconstructive Design with Logos
44:01 "Transform Diagrams Into Presentations"
46:12 "Refining AI for Jaw-Dropping Results"

Keywords: Gemini 2.5, Gemini 2.5 Flash Image, Nano Banana, Google AI, Google DeepMind, AI image generation, multimodal model, AI photo editing, image manipulation, text-to-image model, image editing AI, large language model, character consistency, AI headshot generator, real estate image editing, product mockup generator, smart image blending, style transfer AI, Google AI Studio, LM Arena, Elo score, AI watermarks, synthID fingerprint, Photoshop alternative, AI-powered design, generative AI, API integration, Adobe integration, AI for business, visual content creation, creative AI tools, professional image editing, iterative prompting, interior design AI, infographic generator, training material visuals, A/B test variations, marketing asset creation, production scaling, image benchmark, AI output watermark, cost-effective AI images, scalable AI infrastructure, prompt-based editing, natural language image editing, OpenAI GPT-4o image, benchmarking leader

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
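For listeners who want to try the API integration discussed in the episode, here is a minimal sketch of a text-to-image call, assuming Google's google-genai Python SDK. The model identifier and the exact response handling are assumptions based on the release naming; verify both against the current Gemini API documentation before relying on them.

    from google import genai

    # Assumes the google-genai SDK; an API key from Google AI Studio is required.
    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # assumed model id; may differ
        contents=["A photorealistic product shot of a tiny banana on a desk"],
    )

    # Generated images come back as inline-data parts alongside any text.
    for part in response.candidates[0].content.parts:
        if getattr(part, "inline_data", None) is not None:
            with open("nano_banana_output.png", "wb") as f:
                f.write(part.inline_data.data)
        elif getattr(part, "text", None):
            print(part.text)

The same call shape supports iterative, prompt-based editing as covered in the episode: pass a previously generated image plus a natural-language instruction in contents and the model returns the edited image.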

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Genie 3: A New Frontier for World Models with Jack Parker-Holder and Shlomi Fruchter - #743

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Aug 19, 2025 61:01


Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model's scaled-up capabilities, including creating real-time, interactive, and high-resolution environments. Jack and Shlomi share their perspectives on what defines a world model, the model's architecture, and key technical challenges and breakthroughs, including Genie 3's visual memory and ability to handle “promptable world events.” Jack, Shlomi, and Sam share their favorite Genie 3 demos, and discuss its potential as a dynamic training environment for embodied AI agents. Finally, we explore future directions for Genie research. The complete show notes for this episode can be found at https://twimlai.com/go/743.

WeatherBrains
WeatherBrains 1022: Tighter Than A Tick's Butt

WeatherBrains

Play Episode Listen Later Aug 19, 2025 98:40


Tonight's Guest WeatherBrain is a professor of mechanical engineering and applied mathematics at the University of Pennsylvania. He previously served as a principal scientist at Microsoft Research, where he led the development of Aurora, the groundbreaking AI foundation model for earth system forecasting. Dr. Paris Perdikaris, welcome to WeatherBrains!

Our email officer Jen is continuing to handle the incoming messages from our listeners. Reach us here: email@weatherbrains.com.

What is Aurora and the motivation for the program? (08:15)
Practical experience with tropical forecasting using Aurora (12:30)
Success stories with Aurora (14:00)
Convection-allowing models (CAMs) (17:15)
Convective feedback problems with models (18:45)
Emergence of the Google DeepMind ensemble and other AI models (20:00)
Aurora's hardware architecture and its importance (23:30)
Continuing need for physics-based models in the age of AI (28:00)
Ethical considerations and biases in model training data (33:30)
Role of US agencies and potential loss of funding (37:55)
Communicating AI forecasts to the public and the ensuing ethical issues (42:30)
Broad risks of using AI (44:30)
Aurora business model (58:45)
The Astronomy Outlook with Tony Rice (01:05:25)
This Week in Tornado History With Jen (01:07:25)
E-Mail Segment (01:09:30)
and more!

Web Sites from Episode 1022:
Penn's Predictive Intelligence Lab
Alabama Weather Network on Facebook

Picks of the Week:
James Aydelott - Dr. Cameron Nixon on YouTube: Cell Mergers and Nudgers
Jen Narramore - "Significant Tornadoes 1680-1991" by Thomas Grazulis
Rick Smith - USA Jobs
Troy Kimmel - Foghorn
Kim Klockow-McClain - Out
John Gordon - Live Recon in the Atlantic Basin
Bill Murray - Foghorn
James Spann - Weather Lab: Cyclones

The WeatherBrains crew includes your host, James Spann, plus other notable geeks like Troy Kimmel, Bill Murray, Rick Smith, James Aydelott, Jen Narramore, John Gordon, and Dr. Kim Klockow-McClain. They bring together a wealth of weather knowledge and experience for another fascinating podcast about weather.

a16z
Google DeepMind Lead Researchers on Genie 3 & the Future of World-Building

a16z

Play Episode Listen Later Aug 16, 2025 41:28


Genie 3 can generate fully interactive, persistent worlds from just text, in real time.

In this episode, Google DeepMind's Jack Parker-Holder (Research Scientist) and Shlomi Fruchter (Research Director) join Anjney Midha, Marco Mascorro, and Justine Moore of a16z, with host Erik Torenberg, to discuss how they built it, the breakthrough “special memory” feature, and the future of AI-powered gaming, robotics, and world models.

They share:
How Genie 3 generates interactive environments in real time
Why its “special memory” feature is such a breakthrough
The evolution of generative models and emergent behaviors
Instruction following, text adherence, and model comparisons
Potential applications in gaming, robotics, simulation, and more
What's next: Genie 4, Genie 5, and the future of world models

This conversation offers a first-hand look at one of the most advanced world models ever created.

Timecodes:
0:00 Introduction & The Magic of Genie 3
0:41 Real-Time World Generation Breakthroughs
1:22 The Team's Journey: From Genie 1 to Genie 3
5:03 Interactive Applications & Use Cases
8:03 Special Memory and World Consistency
12:29 Emergent Behaviors and Model Surprises
18:37 Instruction Following and Text Adherence
19:53 Comparing Genie 3 and Other Models
21:25 The Future of World Models & Modality Convergence
27:35 Downstream Applications and Open Questions
31:42 Robotics, Simulation, and Real-World Impact
39:33 Closing Thoughts & Philosophical Reflections

Resources:
Find Shlomi on X: https://x.com/shlomifruchter
Find Jack on X: https://x.com/jparkerholder
Find Anjney on X: https://x.com/anjneymidha
Find Justine on X: https://x.com/venturetwins
Find Marco on X: https://x.com/Mascobot

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Daily Tech News Show
Inside Google DeepMind's Genie 3 world model - DTNSB 5075

Daily Tech News Show

Play Episode Listen Later Aug 5, 2025 32:01


Cloudflare calls out Perplexity for evading no-crawl directives, and Windows XP and Clippy are coming for your feet!

Starring Jason Howell, Tom Merritt, and Ryan Whitwam.

Links to stories discussed in this episode can be found here. Hosted on Acast. See acast.com/privacy for more information.

60 Minutes
08/03/2025: Demis Hassabis and Freezing the Biological Clock

60 Minutes

Play Episode Listen Later Aug 4, 2025 46:32


Demis Hassabis, a pioneer in artificial intelligence, is shaping the future of humanity. As the CEO of Google DeepMind, he was first interviewed by correspondent Scott Pelley in 2023, during a time when chatbots marked the beginning of a new technological era. Since that interview, Hassabis has made headlines for his innovative work, including using an AI model to predict the structure of proteins, which earned him a Nobel Prize. Pelley returns to DeepMind's headquarters in London to discuss what's next for Hassabis, particularly his leadership in the effort to develop artificial general intelligence (AGI) – a type of AI that has the potential to match the versatility and creativity of the human brain.

Fertility rates in the United States are currently near historic lows, largely because fewer women are having children in their 20s. As women delay starting families, many are opting for egg freezing, the process of retrieving and freezing unfertilized eggs, to preserve their fertility for the future. Does egg freezing provide women with a way to pause their biological clock? Correspondent Lesley Stahl interviews women who have decided to freeze their eggs and explores what the process entails physically, emotionally and financially. She also speaks with fertility specialists and an ethicist about success rates, equity issues and the increasing market potential of egg freezing. This is a double-length segment.

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Lex Fridman Podcast
#475 – Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games

Lex Fridman Podcast

Play Episode Listen Later Jul 23, 2025 154:56


Demis Hassabis is the CEO of Google DeepMind and a Nobel Prize winner for his groundbreaking work in protein structure prediction using AI. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep475-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript: https://lexfridman.com/demis-hassabis-2-transcript

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Demis's X: https://x.com/demishassabis
DeepMind's X: https://x.com/GoogleDeepMind
DeepMind's Instagram: https://instagram.com/GoogleDeepMind
DeepMind's Website: https://deepmind.google/
Gemini's Website: https://gemini.google.com/
Isomorphic Labs: https://isomorphiclabs.com/
The MANIAC (book): https://amzn.to/4lOXJ81
Life Ascending (book): https://amzn.to/3AhUP7z

SPONSORS: To support this podcast, check out our sponsors & get discounts:
Hampton: Community for high-growth founders and CEOs. Go to https://joinhampton.com/lex
Fin: AI agent for customer service. Go to https://fin.ai/lex
Shopify: Sell stuff online. Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
AG1: All-in-one daily nutrition drink. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(00:29) - Sponsors, Comments, and Reflections
(08:40) - Learnable patterns in nature
(12:22) - Computation and P vs NP
(21:00) - Veo 3 and understanding reality
(25:24) - Video games
(37:26) - AlphaEvolve
(43:27) - AI research
(47:51) - Simulating a biological organism
(52:34) - Origin of life
(58:49) - Path to AGI
(1:09:35) - Scaling laws
(1:12:51) - Compute
(1:15:38) - Future of energy
(1:19:34) - Human nature
(1:24:28) - Google and the race to AGI
(1:42:27) - Competition and AI talent
(1:49:01) - Future of programming
(1:55:27) - John von Neumann
(2:04:41) - p(doom)
(2:09:24) - Humanity
(2:12:30) - Consciousness and quantum computation
(2:18:40) - David Foster Wallace
(2:25:54) - Education and research

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips