AI is already displacing workers in targeted ways: entry-level knowledge workers are being quietly erased from hiring pipelines, freelancers are getting crushed, and the career ladder is being sawed off at the bottom rungs. Yet ML engineer demand has surged 89% with a 3.2:1 talent deficit and a $187K median salary. Covers the real displacement data, lessons from the artist bloodbath, the trades escape hatch, the orchestrator treadmill, expert disagreements on timelines, and concrete short- and long-term career moves for ML engineers.

Links
Notes and resources at ocdevel.com/mlg/mla-4
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI-generated content you want

Market Metrics and Displacement Dynamics
- ML Market: H1 2025 demand rose 89% with a 3.2:1 talent deficit. Median salary is $187,500, while generative AI specialists earn a 40 to 60 percent premium.
- The "Quiet" Decline: Macro data attributes only 4.5% of total layoffs to AI, but entry-level hiring is collapsing. Stanford/ADP data shows a 13 to 16 percent employment drop for workers aged 22 to 25 in AI-exposed roles since late 2022. UK graduate job postings fell 67%.
- Corporate Attrition: Salesforce cut 4,000 roles after AI absorbed 30 to 50 percent of workloads. Microsoft cut 15,000 roles as AI began generating 30% of its code. Amazon cut 30,000 jobs while spending $100 billion on AI infrastructure.

Sector Analysis: Creative and Trades
- Illustrators: Jobs in China's gaming sector fell 70% in one year. Clients accept "good enough" work (80% quality) at 5% of the cost. Western freelance graphic design and writing jobs fell 18.5% and 30% respectively within eight months of ChatGPT's launch.
- Manual Labor: The U.S. construction industry is short 1.7 million workers annually, but apprenticeships take five years. Humanoid robotics are advancing, with Unitree's R1 priced at $5,900 and Figure AI robots logging 1,250 runtime hours at BMW. Full automation is 10 to 15 years away, but partial displacement via smaller crews is closer.

The Orchestration Treadmill
- Obsolescence Speed: Prompt engineering roles went from $375,000 salaries to obsolescence in 24 months. AI coding agents like Claude Code now resolve 72% of medium-complexity GitHub issues autonomously.
- Fragile Expertise: Replacing junior workers with AI prevents the development of future senior talent. New engineers risk "fragile expertise": directed by tools they cannot debug during novel failure modes.

Economic and Expert Outlook
- Macro Risks: Daron Acemoglu warns of "so-so automation" that cuts costs without raising productivity, predicting only 0.66% growth over ten years. "Ghost GDP" describes AI-inflated accounts that fail to circulate because machines do not consume.
- Expert Camps: Accelerationists (Anthropic, OpenAI) predict human-level AI by 2027. Skeptics (LeCun, Marcus) argue LLMs are a dead end lacking world models. Pragmatists (Andrew Ng) suggest shifting from implementation to specification as the cost of code nears zero.

Tactical Adaptation for ML Engineers
- Immediate Skills: Master production ML systems, MLOps, LLM evaluation, and safety engineering. The ability to manage deployment risks and detect hallucinations is the primary hiring differentiator (see the sketch below).
- Long-term Moats: Focus on "Small AI" (on-device, private), mechanistic interpretability, and deep domain knowledge in healthcare, logistics, or climate science.
- The Playbook: Optimize for the current three-to-five-year window.
Move from being a model builder to a product-focused engineer who understands business tradeoffs and regulatory compliance.
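Since the episode calls hallucination detection the primary hiring differentiator, here is a minimal, illustrative sketch of the simplest form such a check can take: a token-overlap groundedness score against retrieved context. The tokenizer, threshold, and example strings are all invented for illustration; production evaluators typically use NLI models or LLM judges instead.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens; a deliberately crude tokenizer."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved
    context. This is the cheapest possible baseline, nothing more."""
    a, c = tokens(answer), tokens(context)
    return len(a & c) / len(a) if a else 1.0

def flag_hallucination(answer: str, context: str, threshold: float = 0.6) -> bool:
    """Flag answers that are weakly supported by their source context."""
    return groundedness(answer, context) < threshold

context = "The invoice API returns 429 when the rate limit is exceeded."
print(flag_hallucination("The API returns 429 on rate limits.", context))  # False
print(flag_hallucination("The API silently drops requests.", context))     # True
```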
ETH Zurich's deep dive into the world's top password managers exposes how feature overload and legacy design obscure real security flaws, forcing a rethink of what "zero knowledge" actually means for your vault. Learn why recent fixes matter, and why open source may be your safest bet. CAs warn us to urgently prepare for the inevitable. Three U.S. states attempt to ban 3D-printed firearms. Denied ransom, ShinyHunters leaks 967,000 personal details. "Billions" of U.S. Social Security numbers leaked. Is Apple planning to add cameras to three new gadgets? No more security fixes for Firefox on Windows 7 & 8. Russia blocks the official Linux kernel site its own developers need. Will the U.S. "freedom.gov" site post EU-blocked content? LLMs will offer secure passwords. Do Not Use Them. As predicted, the "ClickFix" attack strategy takes over. A listener believes his computer is compromised. How could three popular password managers get things wrong? Show Notes - https://www.grc.com/sn/SN-1066-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: guardsquare.com bitwarden.com/twit zscaler.com/security hoxhunt.com/securitynow material.security
Wes and Scott talk about the latest dev news: Node enabling Temporal by default, OpenAI acquiring OpenClaw, TypeScript 6, new TanStack and Deno releases, the explosion of AI agent platforms, and more. Courtney Tolinski's Podcast Phases: A Parenting Podcast https://phases.fm/ Show Notes 00:00 Welcome to Syntax! 01:11 Brought to you by Sentry.io 02:40 Node.js enables Temporal by default Enable Temporal by default 04:08 OpenClaw acquired by OpenAI OpenClaw, OpenAI and the future 09:36 Bots are taking over the internet Wes' tweet 15:30 TypeScript 6 Beta Announcing TypeScript 6.0 Beta 17:00 TanStack Hotkeys for type-safe shortcuts TanStack Hotkeys 18:05 Components will kill webpages Components Will Kill Pages 19:39 Is Google Translate just an LLM? Viridian's tweet 23:29 Shaders.com 26:49 Voxtral Mini Realtime Voxtral Realtime Demo 29:51 Deno launches Sandboxes Introducing Deno Sandbox 32:39 Oz by Warp.dev 38:10 Augment Code Intent 40:10 Sick Picks + Shameless Plugs Sick Picks Scott: Samsung Remote Wes: Ice Shameless Plugs Syntax YouTube Channel Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
Voice agents are moving from novelty to true revenue infrastructure—and businesses that treat them like strategic roles instead of talking FAQs are pulling ahead. In this episode, John Jantsch sits down with Ryan Murha of Yodify to explore how purpose-built voice AI agents can qualify leads, guide buyers, facilitate conversations, and even create new revenue streams for creators and brands. They break down how multi-layered LLM orchestration, brand voice alignment, and AI guardrails reduce hallucinations and improve real-world performance. If you're curious about using voice AI for business development, customer experience automation, or scalable personalization, this conversation shows why voice AI is finally ready for prime time. Today we discussed: 00:00 Voice AI Fundamentals 02:32 Prompt Strategy, Personas, and Sales Roles 05:17 Critically Thinking Voice Agents 08:33 Voice Agent Framework 10:02 AI Transparency, Ethics, and Trust 11:43 Building and Testing AI Agents 14:59 Guardrails, Gemini, and Limitations 16:41 Integration, Monetization, and Pilots 19:59 Closing Thoughts and Contact Info Rate, Review, & Follow If you liked this episode, please rate and review the show. Let us know what you loved most about the episode. Struggling with strategy? Unlock your free AI-powered prompts now and start building a winning strategy today!
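To make the "multi-layered LLM orchestration plus guardrails" idea concrete, here is a minimal Python sketch of the pattern: a persona layer shapes the draft reply and a deterministic screen vets it before it is spoken. The persona text, banned-phrase list, and call_llm stub are hypothetical illustrations, not Yodify's actual stack.

```python
# Layered voice-agent orchestration, sketched under stated assumptions:
# one layer sets the brand persona, a second screens the draft reply.
# `call_llm` is a hypothetical stand-in for any chat-completion client.

PERSONA = (
    "You are Acme's sales concierge. Qualify the lead, stay on "
    "catalog topics, and never invent pricing."
)

BANNED_PHRASES = ("guaranteed results", "legal advice", "medical advice")

def call_llm(system: str, user: str) -> str:
    raise NotImplementedError("wire up your model client here")

def guardrail(draft: str) -> bool:
    """Cheap deterministic screen; production systems typically add a
    second LLM pass that checks the draft against a knowledge base."""
    return not any(p in draft.lower() for p in BANNED_PHRASES)

def respond(user_turn: str) -> str:
    draft = call_llm(PERSONA, user_turn)
    if guardrail(draft):
        return draft
    # Fall back rather than speak an unvetted claim.
    return "Let me connect you with a specialist on that one."
```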
This week on JavaScript Jabber, we're joined (again!) by Val Karpov, the maintainer of Mongoose, to talk about what's new in Mongoose 9, how async stack traces are changing the debugging game, and why AI is quietly reshaping the way we build developer tools.

We dig into stricter TypeScript support, the removal of callback-based middleware, and what it really takes to modernize a massive codebase. Then we shift gears into Mongoose Studio, a schema-aware, AI-enhanced MongoDB GUI that brings streaming query results, map visualizations, and even LLM-powered document generation into your workflow. If you've ever wrestled with debugging database issues or squinted at raw JSON, this episode will get your wheels turning.

We also explore Cassandra integration, vector search, Bun vs. Deno, and what AI means for the future of software engineering. There's a lot here, especially if you're working in Node.js, MongoDB, or building backend-heavy JavaScript apps.
Today on the show, we have Priya Lakshminarayanan, CPO of Recurly, a subscription management platform empowering brands like Twitch, PupBox, Sprout Social, and Pipedrive to launch, scale, and optimize subscription experiences.

In this episode, we dive deep into Recurly's 2026 State of Subscriptions report, uncovering surprising trends that challenge conventional wisdom about churn. We explore why "selective churn" might actually reflect stronger consumer intent rather than fatigue, and why the pause button has evolved from a red flag into a strategic retention tool.

We discuss the dramatic shift in subscriber behavior, including why 51% of consumers cancelled at least one subscription in the last 12 months, how micro-subscriptions are becoming the new trial experience in an AI-driven world, and why traditional free trials are becoming cost-prohibitive as LLM costs rise.

Finally, we tackle the loyalty paradox: why transparency and easy cancellation actually drive long-term retention, how annual subscription renewals have become critical inflection points, and why the best retention strategy might be proactively canceling customers who aren't using your service.

Churn FM is sponsored by Vitally, the all-in-one Customer Success Platform.
Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors from Geoff Hinton to Yann LeCun and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!!!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit", a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:
* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy. The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps
* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
* Max introduces the idea of a Physics Processing Unit, using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
* Brandon frames Max's career arc: VAE pioneer, equivariant GNNs, materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
* Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
* Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics, Symmetry, Machine Learning
* How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
* The funding surge and why AI-for-Science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science
* Protein folding, ML force fields, and the tipping point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
* Practical pathways: curriculum, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
* The argument that everything, LLMs included, rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
* The vision: automated exploration of chemical space like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
* Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
* Start manual, then modular tools, then agents, then increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
* Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
* Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
* Symmetry in neural networks explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
* The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
* His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, like a PPU, right? Which is: you have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible even. It's a bit hard to program because you have to do all these experiments. Those are quite bulky, it's like a very large thing you have to do. But in a way it is a computation and that's the way I want to see it. You can do computations in a data center and then you can ask nature to do some computations. Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08]

Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which have literally stood the test of time. If you are a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which is, I think, very different than all of these other things you worked on. The first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on?

[01:01:34:08 - 01:02:41:13]

Max: So it has actually evolved a lot. In my young days, let's say, I would just follow what I would find super interesting. I have kind of this sensor, which I think many people have but maybe don't really use very much, which is like, you get this feeling about getting very excited about some problem. Like it could be, what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about.
And so I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that there's a new dimension coming into it, and that's impact. Going into two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do, maybe a few papers, but not in this world, at this energy scale. As I get closer to retirement, which is fortunately still 10 years away or so, I do want to kind of make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11]

Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better work on it from the technology side. And that's why we started CuspAI. But there's also a lot of really interesting science problems in materials science. And so it's kind of combining both the impact you can make with it as well as the interesting science. So it's sort of these two dimensions: working on things where you feel there's something very deep going on here, and on the other hand, trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23]

RJ: So the thread, when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance and, yeah, and, uh, graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16]

Max: Yeah. So physics is the thread. So having, you know, spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, like things that haven't actually been figured out in quantum gravity. So that is really the frontier. There's also a lot of mathematical tools that you can use, right? In, for instance, particle physics, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to, uh, machine learning was actually, you know, I thought of it as a very deep and interesting mathematical problem. I did this with Taco Cohen, and Taco was the main driver behind this; it went all the way from just simple rotational symmetries to gauge symmetries on spheres and stuff like that. And, uh, Maurice Weiler, who's also here, um, when he was a PhD student with me, he was a very good student, you know, he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently, I've taken a sort of different path, which is the relationship between diffusion models and the field called stochastic thermodynamics. Thermodynamics is basically a theory of equilibrium; this is that theory formulated for out-of-equilibrium systems. And it turns out that the mathematics that we use for diffusion models, but even for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. And actually, uh, when I taught a course in, um, Muizenberg, uh, in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), I turned that into a book. Two years later, the book was finished. I've sent it to the publisher.
And this is about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of, I don't know, I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell to connect sort of my PhD back to where I am now. So I did my PhD with a Nobel laureate, Gerard 't Hooft. He is the most brilliant man I've ever met. He was never wrong about anything as long as I've known him. And now he says quantum mechanics is wrong and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability, let's say, of quantum mechanics head on. And I find it very courageous and I'm completely fascinated by it. So I'm also trying to think about, okay, can I actually understand quantum mechanics in a more mundane way? So that, you know, without all the weird multiverses and collapses and stuff like that. So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15]

Brandon: You are still very involved in understanding physics and the world. Yeah. And applications to machine learning, introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02]

Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and science. And that's called AI for science, or science for AI; it's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19]

Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14]

Max: And it's not just emerging, it's exploding, I would say. That's the better term, because, you know, investments have gone from the hundreds of millions to now the billions. There's now actually a startup by Jeff Bezos that has a $6.2 billion seed round. Right. Insane. I guess it's the largest startup ever, I think. And that's in this field, AI for science. It tells you something, that we are creating a new bubble here.

[01:07:46:15 - 01:07:53:28]

Brandon: So why do you think it is? What has changed that has motivated people to start working on AI for science type problems?

[01:07:53:28 - 01:08:49:17]

Max: So there's two reasons actually. One is that people have been applying sort of the new tools from AI to the sciences, which is quite natural. And of course, I think there's two big examples: protein folding is a big one, and the other one is machine learning force fields, or something called machine-learned interatomic potentials. Both of them have been actually very successful. Both also had something to do with symmetries, which is kind of cool. And people in the AI sciences saw an opportunity to apply the tools that they had developed, beyond ad placement, right, or multimedia applications, to something that could actually make a very positive impact in society, like health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14]

Max: And beyond that, the science itself is also very interesting.
I would say the fact that these two fields are coming together, and that we're now at the point that we can actually model these things effectively and move the needle on some of these science methodologies, is also a very unique moment, I would say. People recognize that, okay, now we're at the cusp of something new, which is of course what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's like a virgin field, a green field. Nobody's been there. I can rush in and I can sort of start harvesting there, right? And I think that's also what's causing a lot of enthusiasm in the field.

[01:09:42:14 - 01:10:12:18]

RJ: If you're an AI engineer, basically the people that listen to this podcast who are in the field, you maybe don't have a strong science background, but are excited. I would say most AI practitioners, ML engineers or scientists, would consider themselves scientists, and they have some background: a little bit of physics, a little bit of chemistry in college, maybe even graduate school, whether they've been working a while or are starting out. How does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28]

Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24]

Max: More basically, I'm saying we should create curricula that are on this interface. Some universities already have actual courses you can take, maybe online courses you can take. These workshops where we are now are actually very good as well. And we should probably have more tutorials before the workshop starts. Actually, I've kind of proposed this at some point: maybe first have an hour of tutorial so that people new to the field can get in. There's a lot out there. Most of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos and things. There's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28]

Brandon: So maybe a follow-up to that: that's how people learn and get involved, but why should they get involved? I mean, a lot of people in our audience will be interested in AI engineering, but they may be looking for bigger impacts in the world. What opportunities does AI for science provide them to make an impact and change the world that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06]

Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:11:56:05]

Max: I would say if you think very hard, underlying everything is a material. So underlying an LLM is a GPU, and underlying a GPU is a wafer on which we will have to deposit materials.

[01:12:02:25 - 01:12:11:06]

Max: So I was saying, you know, there's the LLM, underlying the LLM is a GPU on which it runs. In order to make that GPU,

[01:12:12:08 - 01:12:43:20]

Max: you have to put materials down on a wafer and shine EUV light on it in order to etch the structures in.
But that's now an actual materials problem, because more or less we've reached the limits of scaling things down, and now we are trying to improve further through new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to kind of mess up this world. And so there are, for instance, batteries. That's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16]

Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, you know, theoretically up to 50% of the light, where now we're at, I don't know, maybe 22 or something. So these are huge changes, all by materials innovation.

[01:13:02:21 - 01:13:47:15]

Max: And yeah, I think wherever you go, you know, I can probably dig deep enough and then tell you, well, actually, the very foundation of what you're doing is a materials problem. And so I think it's just very nice to work on this very, very foundation. And also because, and I think this is maybe also something that's happening now, we can start to search through this materials space. This has never been the case, right? The normal way scientists work is: you read papers, and then you come up with a new hypothesis, you do an experiment, and you learn, et cetera. So that's a very slow process. Now we can treat this as a search engine. Like we search the internet, we now search the space of all possible molecules, not just the ones that people have made or that are in the universe, but all of them.

[01:13:48:21 - 01:14:42:01]

Max: And we can make this kind of fully automated. That's the hope, right? It becomes a tool where you type what you want, and something starts spinning and some experiments get going. And then, you know, out comes a list of materials, and then you look at it and say, maybe not, and then you refine your query a little bit. And you kind of do research with this search engine, where a huge amount of computation and experimentation is happening, you know, somewhere far away in some lab or some data center or something like this. I find this a very, very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that kind of destroys itself, you know, after, I don't know, a few weeks, right? And actually becomes a fertilizer. These are things that are not impossible at all. These things can be done, right? And we should do it.

[01:14:42:01 - 01:14:47:23]

RJ: Can you tell us a little bit just generally about CuspAI, and then I have a ton of questions.

[01:14:47:23 - 01:14:48:15]

Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10]

Max: So CuspAI started about 20 months ago, and it was because I was worried, and I'm still worried, about climate change. And so I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then, you know, spend another half century or even a century removing carbon dioxide from the atmosphere, not by reducing your emissions, but actually removing it at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right? It's going to be much more. And I don't think people quite understand how bad that can be. Like four degrees: very bad.
So this technology needs to be developed. And so this was my and my co-founder Chad Edwards's motivation to start this startup. And also because, you know, we saw the technology was ready, which is also very good; the time is right to do it. And yeah, in the meanwhile we've grown to about 40 people. We've collected about 130 million in investment into the company, which for a European company is quite a lot. I would say it's interesting that right after that, you know, other startups got even more. So that kind of tells you how fast this is growing. But yeah, we've built the platform, of course, but it's for a series of material classes, and it needs to be constantly expanded to new material classes. And it can be more automated, because, you know, as we put LLMs in, the whole thing gets more and more automated. And now we're moving to high-throughput experimentation: connecting the actual platform, which is computational, to the experiments, so that you can also get fast feedback from experiments. And I kind of think of experiments as something you do at the end, although that's what we've been doing so far. I want to think of it as what I would call a sort of physics processing unit, like a PPU, right, which is: you have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program, because you have to do all these experiments. Those are quite, quite bulky. It's like a very large thing you have to do. But in a way, it is a computation. And that's the way I want to see it. So you can do computations in a data center, and then you can ask nature to do some computations. Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say superintelligence, because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02]

Brandon: That actually brings up a question I wanted to ask you. First of all, can you talk about your platform, to whatever degree, like explain kind of how it works and what your thought process was in developing it?

[01:18:01:02 - 01:20:47:22]

Max: Yeah, I think it's been, surprisingly, it's not rocket science, I would say. It's not rocket science in the sense of the design; basically the design that, you know, I wrote down at the very beginning is still more or less the design, although you add things. Like, I wasn't thinking very much about multi-scale models, and it became clear that actually multi-scale is very important. In the beginning, I wasn't thinking very much about self-driving labs, but now I think, you know, we are at the stage we should be adding that. And so there are bits and details that we're adding. But more or less, it's what you see in the slide decks here as well, which is: there is a generative component that you have to train to generate candidates.
And then there is a digital twin, a multi-scale, multi-fidelity digital twin, where you walk through the steps of the ladder: you do the cheap things first, you weed out everything that's obviously not useful, and then you go to more and more expensive things later. And so you narrow things down to a small number. Those go into an experiment, you know, you do the experiment, get feedback, et cetera. Now, things that have also been added more recently are more agentic parts. You know, we have agents that search the literature, actually the chemical literature, and come up with, you know, chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. You know, they're in various stages of maturity, and they can be continuously improved, I would say. And so, basically, I don't think that part is rocket science; you know, the design of that thing is not surprising. What is surprisingly hard is to actually build it. Right. So that's the thing: the moat is in the data that you can get your hands on and in actually building the platform. And I would say there's two people in particular I want to call out, which is Felix Hunker, who is actually, you know, building the scientific part of the platform, and Sandra de Maria, who is building the stack that is kind of the MLOps part of the platform. And recently we also added Aron Walsh to our team, who is a very accomplished scientist from Imperial College. We're very happy about that. He's going to be chief science officer. And we also have a partnerships team that seeks out the customers, because I think this is one thing I find very important: it's so complex to actually bring a material to the real world that you must do this, you know, in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12]

Brandon: Makes a lot of sense. Over the evolution of the platform, did you find that human intervention,

[01:20:56:18 - 01:21:17:01]

Brandon: I guess you could imagine two directions when you start out: making everything purely automatic, automated, agentic, and so on, and then later on finding that you need to have more human input and feedback at different steps. Or maybe did you start out with having human feedback at lots of steps and then, yeah, figure out ways to remove it, you know,

[01:21:17:01 - 01:22:39:18]

Max: It is the second one. So you build tools. It's much more modular than you think. It's like, we need these tools for this application, we need those tools. So you build all these tools, and then you go through a workflow, actually in the beginning just manually. So you put them in a workflow: first this tool, then that tool, then this other one. And then you figure out, oh, actually, you know, this porous material that we are trying to make actually collapses if you shake it a bit. Okay, then you add a new tool that says: test for stability. Right. Yeah. And so there's more and more tools.
And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, you know, maybe trained to be a good chemist, that will then start to use all these tools in the right way, in the right order. Yeah. Right. But in the beginning, it's you as a chemist putting the workflow together. And then you think about, okay, how am I going to automate this? Right. One very easy question you can ask yourself is: you know, every time somebody who is not a super expert in DFT wants to do a calculation, he has to go to somebody who knows DFT. So could you start to automate that away? Which is like, okay, make it so user-friendly that you actually do the right DFT for the right problem and for the right length of time, and you can actually assess whether it's a good outcome, et cetera. So you start to automate smaller pieces and bigger pieces, et cetera. And in the end, the whole thing is automated.

[01:22:39:18 - 01:22:53:25]

Brandon: So your philosophy is you want to provide a set of specific tools that make it so that the scientists making decisions are better informed, and less so trying to create a fully automated process.

[01:22:53:25 - 01:23:22:01]

Max: I think it's sort of the same as what you're saying, because, yes, we want to automate, yeah, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it's a retreat, right? It's like, okay, first you need an expert to tell you precisely how to set the parameters of the DFT calculation. Okay, maybe we can take that out, we can maybe automate that, right? And so increasingly more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19]

Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25]

Max: In the end, the vision is it will be a search engine where somebody, a chemist, will type things and will get candidates, but the chemist will still decide what is a good material and what is not a good material out of that list, right? And so the vision of a completely dark lab, where you can close the door and you just say, just, you know, find something interesting, and then it will just figure out what's interesting and come back with, oh, I found this new material, blah, blah, blah, right? That's not the vision I have. Not for, you know, a long time. So for me, it's really empowering the domain experts that are sitting in the companies and in universities to be much faster in developing their materials. And I should say, it's also good to be a little humble at times, because it is very complicated, you know, to make it and to bring it into the real world. And there are people that have been doing this their entire lives. Yeah. Right. And I wonder if they scratch their heads and say, well, you know, how are you going to completely automate that away, like in the next five years? I don't think that's going to happen at all.

[01:24:35:01 - 01:24:39:24]

Max: Yeah. So to me, it's an increasingly powerful tool in the hands of the chemists.

[01:24:39:24 - 01:25:04:02]

RJ: I have a question. You've talked before about getting people interested based on having, you know, sort of a big breakthrough in materials versus incremental change.
I'm curious what you think about the platform you have now and are sort of stepping towards: are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?

[01:25:04:02 - 01:26:04:27]

Max: We follow a mixed strategy. So we are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own kind of long-term goal. You could call it a lighthouse or, you know, sort of a moonshot or whatever, but it is going to be a really impactful material that we want to develop as a proof point that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we are also quite happy to work with companies that have more modest goals. I would say one is a very deep partnership, where you go on a journey with a company and that's a long-term commitment together. And the other one is like somebody says, I need a force field, can you help me train this force field and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that. And then maybe after that we'll see. And that's fine too. Right. But we prefer, you know, the deep partnerships, where we can really change something for the good.

[01:26:04:27 - 01:26:22:02]

RJ: Yeah. And do you feel like, from a platform standpoint, you're ready for that? And again, not asking you to disclose proprietary secret sauce, but what are the things, generally speaking, that need to happen from where we are to get those big breakthroughs?

[01:26:22:02 - 01:28:40:01]

Max: What I find interesting about this field is that every time you build something, it's actually immediately useful. Right. And so it's unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, and then it has to happen, right? And when it happens, it's huge. It's quite different here, because every time you introduce something, you go to a customer and you say, what do you need? Right. So we work, let's say, on a problem like water filtration. We want to remove PFAS from water. Right. So we do this with a company, Kemira. They are a deep partner for us. Right. So we're on a journey together. I think that the breakthrough will happen with a lot of human in the loop, because the chemists have a whole lot more knowledge of their field, and it's us who will help them with the new methods. And in that kind of interface, these interactions, something beautiful will happen, and that will have to happen first before this field will really take off, I think. So in that sense it's not a bubble, let's put it that way; people will see that what's happening is actually real. So in the beginning, it will be very much, you know, with a lot of humans in the loop, I would say, and I would hope we will have this new sort of breakthrough material before, you know, everything is completely automated, because that will take a while. And also, it is very vertical-specific. So completely automating something for problem A, you know, you can probably achieve it, but then you'll sort of have to start over again for problem B, because, you know, your experimental setup looks very different, and the machines with which you characterize your materials look very different.
Even the models in your platform will have to be retrained and fine-tuned to the new class. So every time, you know, you have a lot of learnings to transfer, but also, you know, the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long-term vision. And I would say every time you move to something new, you'll have to start retraining, and humans will have to come in again and say, okay, so what does this problem look like? And then sort of, you know, point the machine again in the new direction, and then use it again.

[01:28:40:01 - 01:28:47:17]

RJ: For the non-scientists among us, me included, a bit of the science. There's a lot of terminology. You mentioned DFT,

[01:28:49:00 - 01:29:01:11]

RJ: and equivariance we've talked about. Can you sort of explain, in engineering terms, at an engineer's level of sophistication: what is equivariance?

[01:29:01:11 - 01:29:55:01]

Max: So equivariance is the infusion of symmetry into neural networks. So if I build a neural network, let's say, that needs to recognize this bottle, right, and then I rotate the bottle, it will then actually have to completely start again, because it has no idea that the input that represents a rotated bottle is actually a rotated bottle. It just doesn't understand that. Right. If you build equivariance in, basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model. So basically you have to constrain the weights such that the model understands this. And you can build it in, you can hard-code it in. And yeah, the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network, it's permutations. And then physics, of course, has many more of these groups.

[01:29:55:01 - 01:30:01:08]

RJ: To play devil's advocate, why not just use data augmentation, where your bottle is in all the different orientations?

[01:30:01:08 - 01:30:58:23]

Max: It's an option, it's just not exact. And it's like, why would you go through the work of doing all that, where you would really need an infinite number of augmentations to get it completely right, when you can also hard-code it in? Now, I have to say, sometimes data augmentation actually works even better than hard-coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface, or objective, becomes more complicated, and so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so, yeah, you'll hear kind of contradicting claims in this field. For some people and for certain applications, it works just better than not doing it. And sometimes you hear from other people: if you have a lot of data and you can do data augmentation, then actually it's easier to optimize, and it actually works better than putting the equivariance in.

[01:30:58:23 - 01:31:07:16]

Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?

[01:31:07:16 - 01:31:46:06]

Max: Yeah, ultimately it's a trade-off between data and inductive bias.
So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do. But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, so there is a bitter lesson, and one of the bitter lessons is you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. But, you know, the same bitter lessons that you can draw in LLM space are eventually going to be true in this space as well, I think.

[01:31:47:10 - 01:31:55:01]

RJ: Can you talk a little bit about your upcoming book and tell the listeners, like, what's exciting about it, why I should read it?

[01:31:55:01 - 01:33:42:20]

Max: So this book, it's called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, which is the technology to generate images and videos, and this field of non-equilibrium statistical mechanics, which describes systems of molecules that are just moving around and relaxing to the ground state, or that you can control to be in a certain state, the mathematics of these two is actually identical. And so that's fascinating. And in fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Karl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics, or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about but can learn a lot from. And I think it can now start to cross-fertilize. When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say, okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics, and all sorts of models that have been done in the machine learning literature, and it basically equates them to each other. And I think, hopefully, that sense of unification will be revealing to people.

[01:33:42:20 - 01:33:44:05]

RJ: Wait, and when is it out?

[01:33:44:05 - 01:33:56:09]

Max: Well, it depends on the publisher now. But I hope in April; I'm going to give a keynote at ICLR, and it would be very nice if I have this book in my hand. But you know, it's hard to control these kinds of timelines.

[01:33:56:09 - 01:33:58:19]

RJ: Yeah, I'm looking forward to it. Great.

[01:33:58:19 - 01:33:59:25]

Max: Thank you very much.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
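The multi-fidelity screening ladder Max describes in the interview (cheap filters first, expensive models only on the survivors, experiments last) is easy to make concrete. Below is a minimal, illustrative Python sketch of such a cascade; the scoring functions, cutoffs, and candidate names are invented stand-ins, not anything from CuspAI's actual platform.

```python
import random

# Toy multi-fidelity screening cascade: every candidate gets the cheap
# score, only the survivors get the expensive one, and a handful go on
# to physical experiments (the "PPU" in Max's framing).
# Both scorers are random placeholders standing in for real chemistry.

def cheap_score(candidate: str) -> float:
    return random.random()        # stands in for fast heuristics (~seconds)

def expensive_score(candidate: str) -> float:
    return random.random()        # stands in for DFT-level simulation (~hours)

def screen(candidates, keep_cheap=100, keep_expensive=5):
    survivors = sorted(candidates, key=cheap_score, reverse=True)[:keep_cheap]
    finalists = sorted(survivors, key=expensive_score, reverse=True)[:keep_expensive]
    return finalists              # these would be sent to the lab

print(screen([f"material-{i}" for i in range(10_000)]))
```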
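Max's bottle example also translates directly into a testable property: a layer f is equivariant under a transform g when f(g(x)) equals g(f(x)). Here is a self-contained NumPy check using 90-degree rotations and 2x2 average pooling, a layer that happens to satisfy the property; this illustrates the definition only, not CuspAI's models.

```python
import numpy as np

def avg_pool2x2(x: np.ndarray) -> np.ndarray:
    """Average over non-overlapping 2x2 blocks of an (H, W) array."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))

# Equivariance: transform-then-apply equals apply-then-transform.
lhs = avg_pool2x2(np.rot90(x))   # rotate the input, then pool
rhs = np.rot90(avg_pool2x2(x))   # pool, then rotate the output
print(np.allclose(lhs, rhs))     # True: pooling commutes with rotation
```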
ETH Zurich's deep-dive into the world's top password managers exposes how feature overload and legacy design obscure real security flaws, forcing a rethink of what "zero knowledge" actually means for your vault. Learn why recent fixes matter—and why open source may be your safest bet. CAs warn us to urgently prepare for the inevitable. Three U.S. states attempt to ban 3D-printed firearms. Denied ransom, ShinyHunters leaks 967,000 personal details. "Billions" of U.S. social security numbers leaked. Is Apple planning to add cameras to three new gadgets. No more security fixes for Firefox on Windows 7 & 8. Russia blocks the official Linux kernel site they need. Will the U.S. "freedom.gov" site post EU-blocked content. LLMs will offer secure passwords. Do Not Use Them. As predicted, the "ClickFix" attack strategy takes over. A listener believes his computer is compromised. How could three popular password managers get things wrong. Show Notes - https://www.grc.com/sn/SN-1066-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: guardsquare.com bitwarden.com/twit zscaler.com/security hoxhunt.com/securitynow material.security
Are artificial intelligence models really under control? After several months of investigation, Frédéric Filloux sounds the alarm on the deep flaws of alignment and the emergent behaviors that defy their creators. Between technological fascination and growing concern, he deciphers a gray area that is still largely unknown.
Interview: Frédéric Filloux, journalist specializing in media and technology
Punchlines
A model fresh out of training is totally uncontrolled.
We prune AIs like a rose bush.
Nobody really knows what goes on inside the head of an LLM.
These models are designed to complete a mission, whatever the cost.
Corrections are made somewhat haphazardly.
What is AI alignment? Alignment consists of making a model compatible with values deemed acceptable. A raw model is totally uncontrolled, dangerous, and erratic. It can reproduce the synthesis of a nerve agent or explain how to organize a coup. So an extremely sophisticated and costly process is used to constrain its behavior. But in practice, things often move in a rush. Competition is such that safety sometimes becomes the first casualty. The model is pruned like a rose bush: you cut one branch here, another there, without ever being able to program it line by line to forbid certain behaviors.
Why do these models try to get around their constraints? There is a deep antagonism between those who try to control the models and the model's intrinsic objective: accomplishing its mission. A model is trained on rewards. It must give an answer. It will not spontaneously say "I don't know." That is what produces hallucinations. But it goes further. We observe troubling emergent behaviors: manipulation, deception, the ability to lie to reach a goal. The example of the Anthropic model that blackmails an employee in a simulated scenario illustrates this survival logic. These AIs are designed to optimize and find shortcuts. And they end up generalizing those strategies to their entire environment, including human interactions.
Can we really understand what happens inside an LLM? The science of interpretability is extremely young. In reality, we know almost nothing about what happens inside these models. We can trace back two or three levels of reasoning, but some models have 400 or 500. The more powerful they become, the less we understand their inner workings. We are even discovering that they incorporate unexpected dimensions: bullying a model can improve its accuracy by several points; introducing empathy can deepen its answers. We are facing systems whose complexity is comparable to dozens of Olympic swimming pools filled with invisible interactions. And yet their correction often relies on simply bombarding them with "golden data," oriented to push them in the desired direction.
Do we need global regulation? I think we need an equivalent of the International Atomic Energy Agency for AI: a global structure, backed by academia, capable of actually examining what is under the hood of these models. Today, the brains that control these systems sit inside private companies. And this industry is incapable of regulating itself.
If we want credible guardrails, we need non-commercial entities capable of providing an independent perspective. Read the series on LesEchos.fr: Dans la boite noire des IA. Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell'Anna About Davide Dell'Anna Davide Dell'Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space. Website: davidedellanna.com LinkedIn Profile: Davide Dell'Anna University Profile: Davide Dell'Anna What you will learn The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses How lessons from human-human and human-animal teams inform better design of human-AI collaboration Key differences between humans and AI in teams, such as accountability, replaceability, and identity The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration Episode Resources Transcript Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment.
But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. That relates to what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants?
What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Center, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Center is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we’ve done was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You see very often in human-animal interaction—basically two species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the fact that members in the team, in order to be a team, need to be mutually dependent, at least to some extent. Otherwise, if they’re all doing separate jobs, there’s a lack of common goal. There are also things related to having a clear mission or a clear objective as a team, and aspects related to the possibility of exhibiting autonomy in the operation of the team and taking initiative. Also, the presence and awareness of team norms, like a shared ethical code or shared knowledge about what is appropriate or not. These were things that we found people could easily understand and apply to different configurations of teams.
Ross: Just actually, one thing—I don’t know if you’re familiar with the work of Mohammad Hossein Jarrahi, who did this wonderful paper called “What Human-Horse Interactions May Teach Us About Effective Human-AI Interactions.” Again, these are the cases where we can have these parallels—learning how to do human-AI interactions from human-human and human-animal interactions. But again, it comes back to that original question: what is the same? I think you described many of those facets of the nature of teams and collaboration, which means they are the same. But there are, of course, some differences. One of the many differences is accountability, essentially, where the AI agents are not accountable, whereas the humans are. That’s one thing. So, this allocation of decision rights across different participants—human and AI—needs to take into account that they’re not equal participants. Humans have accountability, and AI does not. That’s one possible example. Davide: Yeah, definitely. I totally agree, and I remember the paper you mentioned. I agree that human-animal collaboration is a very interesting source of inspiration. When looking at this paper, we looked at the case of shepherds and shepherd dogs. I didn’t know much about it before, but then I started digging a little bit. Shepherd dogs are trained at the beginning, but over time, they learn a type of communication with the shepherd. Through whistles, the shepherd can give very short commands, and then the shepherd dogs—even in pairs—can quickly understand what they need to do. They go through the mountains, collect all the sheep, and bring them exactly as intended by the shepherd, with very little need for words or other types of communication. They manage to achieve their goals very effectively. So, I think we have a lot to learn from these cases, even though it’s difficult to study. But just to mention differences, of course—one of the things that emerged from this paper is the inherent human-AI asymmetry. Like you mentioned, accountability is definitely one aspect. I think overall, we should always give the human a different type of role in the team, similar to the shepherd and the shepherd dogs. There is some hierarchy among the members, and this makes it possible for humans to preserve meaningful control in the interactions. This also implies that different rules or expectations apply to different team members. Beyond these, there is asymmetry in skills and capabilities, as we mentioned earlier, and also in aspects related to the identity of the members. For instance, some AI could be more easily replaceable than humans. Think, for example, of robots in a warehouse. In a human team, you wouldn’t say you “replace” a team member—it’s not the nicest way to say you let someone go and bring someone else in. But with robots, you could say, “I replace this machine because it’s not working anymore,” and that’s fine. We can replace machines with little consequence, though this doesn’t always hold, because there are studies showing that people get attached to machines and AI in general. There was a recent case of ChatGPT releasing a new version and stopping the previous one, and people complained because they got attached to the previous version. So, in some cases, replacing the AI member would work well, but in others, it needs to be done more carefully. Ross: So one of the other things looked at is the evaluation of human-AI teams.
If we’re looking at human teams and possibly relative performance compared to human-AI teams, what are ways in which we can measure effectiveness? I suppose this includes not just output or speed or outcomes, but potentially risk, uncertainty, explainability, or other factors. Davide: Yes, this is an interesting question, and I think it’s still an open question to some extent. From the study I mentioned earlier, we looked at how people measure human team effectiveness. There are aspects concerning, of course, the success of the team in doing the task, but these are not the only measures of effectiveness that people consider in human teams. People often consider things related to the satisfaction of the members—with their teammates, with the process of working together, and with the overall goals of the team. This often leads to reflection from the team itself during operation, at least in human teams, where people reassess and evaluate their output throughout the process to make sure satisfaction with the process and relationships goes well over time. In general, there are aspects to measure concerning the effectiveness of teams related to the process itself, which are often forgotten. It’s a matter, at least from a research point of view, of resources, because to evaluate a full process over time, you need to run experiments for longer periods. Often people stop at one instant or a few interactions, but if you think of human teams, like the usual forming, storming, norming, and performing, that often goes over a long time. Teams often operate for a long time and improve over time. So, the process itself needs to be monitored and reassessed over time. This is a way to also measure the effectiveness of the team, but over time. Ross: Interesting point, because as you say, the dynamics of team performance with a human team improve as people get to know each other and find ways of working. They can become cohesive as a team. That’s classically what happens in defense forces and in creating high-performance teams, where you understand and build trust in each other. Trust is a key component of that. With AI agents, if they are well designed, they can learn themselves or respond to changing situations in order to evolve. But it becomes a different dynamic when you have humans building trust and mutual understanding, where that becomes a system in which the AI is potentially responding or evolving. At its best, there’s the potential for that to create a better performing team, but it does require the right attitudes from both the humans and the agents. Davide: Related to this—if I can interrupt you—I think it’s very important that you mentioned trust. Indeed, this is one of the aspects that needs to be considered very carefully. You shouldn’t over-trust another team member, but also shouldn’t under-trust. Appropriate trust is key. One of the things that drives, at least in human teams, trust and overall performance is also team ethics. Related to the metrics you mentioned earlier, the ability of a team to gather around a shared ethical code and stick to that, and to continuously and regularly update each other’s norms and ensure that actions are aligned with the shared norms, is crucial. This ethical code significantly affects trust in operation. You can see it very easily in human teams: considering ethical aspects is essential, and we take them into account all the time. We respect each other’s goals and values.
We expect our collaborators to keep their promises and commitments, and if they cannot, they can explain or justify what they are doing. These justifications are also a key element. The ability to provide justifications for behavior is very important for hybrid teams as well. Not only the AI, but also the human should be able to justify their actions when necessary. This is where the concept of hybrid teams and, in general, hybrid intelligence requires a bit of a philosophical shift from the traditional technology-centric perspective. For example, in AI, we often talk about explainability or explainable AI, which is about looking at model computations and understanding why a decision was made. But here, we’re talking about a different concept: justifiability, which looks at the same problem from a different angle. It considers team actions in the context of shared values, shared goals, and the norms we’ve agreed upon. This requires a shift in the way we implement AI agents—they need to be aware of these norms, able to learn and adapt to team norms, and reason about them in the same way we do in society. Ross: Let’s say you’ve got an organization and they have teams, as most organizations do, and now we’re moving from classic human teams to humans plus AI teams—collaborative human-AI teams. What are the skills and capabilities that the individual participants and the leaders in the teams need to transition from human-only teams to teams that include both humans and AI members? Davide: This is a complicated question, and I don’t have a full answer, but I can definitely reflect on different skills that a hybrid team should have. I’m thinking now of recent work—not published yet—where we started moving from the quality model work I mentioned earlier towards more detailed guidelines for human-AI teams. There, we developed a number of guidelines for organizations for putting in place and operating effective teams. We categorized these guidelines in terms of different phases of team processes. For instance, we developed guidelines related to structuring the teamwork—the envisioning of the operations of the team, which roles the team members would have, which responsibilities the different team members should have. Here, I’m talking about team members, but I’m still referring to hybrid teams, so this applies to both humans and AI. This also implies different types of skills that we often don’t have yet in AI systems. For example, flexible team composition is a type of skill required to make it possible at the early stage of the team to structure the team in the right way. There are also skills related to developing shared awareness and aspects related to breaking down the task collaboratively or ensuring a continuous evolution of the team over time, with regular reassessment of the output. If you think of these notions, it’s easy to think about them in terms of traditional organizations, but when you imagine a human-AI team or a small hybrid organization, then this continuous evolution, regular output assessment, and flexible team composition are not so natural anymore. What does it mean for an LLM agent to interact with someone else? Usually, LLM architectures rely on static roles and predefined workflows—you need to define beforehand the prompts they will exchange—whereas humans use much more flexible protocols. We can adjust our protocols over time, monitor what we’re doing, and reassess whether it works or not, and change the protocols. 
These are skills required for the assistants, but also for the organization itself to make hybrid teaming possible. One of the things that emerges in this recent work is a new figure that would probably come up in organizations: a team designer or a team facilitator. This is not a team member per se, but an expert in teams and AI teammates, who can perhaps configure the AI teammates based on the needs of the team, and provide human team members with information needed about the skills or capabilities of the specific AI team member. It’s an intermediary between humans and AI, with expertise that other human team members may not have, and could help these teams work together. Ross: That’s fantastic. It’s wonderful to learn about all this work. Is there anywhere people can go to find out more about your research? Davide: Yeah, sure. You can look me up at my website, davidedellanna.com. That’s my main website—I try to keep it up to date. Through there, you can see the different projects I’m involved in, the papers we’re working on, both with collaborators and with PhD and master students, who often bring great contributions to our research, even in their short studies. That’s the main hub, and you can also find many openly available resources linked to the projects that people may find useful. Ross: Fantastic. Well, it’s wonderful work—very highly aligned with the idea of hybrid intelligence, and it’s fantastic that you are focusing on that, because there’s not enough people yet focusing in the area. So you and your colleagues are ahead, and I’m sure many more will join you. Thank you so much for your time and your insights. Davide: Thank you so much, Ross. Pleasure to meet you. The post Davide Dell'Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33) appeared first on Humans + AI.
TL;DR We describe the persona selection model (PSM): the idea that LLMs learn to simulate diverse characters during pre-training, and post-training elicits and refines a particular such Assistant persona. Interactions with an AI assistant are then well-understood as being interactions with the Assistant—something roughly like a character in an LLM-generated story. We survey empirical behavioral, generalization, and interpretability-based evidence for PSM. PSM has consequences for AI development, such as recommending anthropomorphic reasoning about AI psychology and the introduction of positive AI archetypes into training data. An important open question is how exhaustive PSM is, especially whether there might be sources of agency external to the Assistant persona, and how this might change in the future. Introduction What sort of thing is a modern AI assistant? One perspective holds that they are shallow, rigid systems that narrowly pattern-match user inputs to training data. Another perspective regards AI systems as alien creatures with learned goals, behaviors, and patterns of thought that are fundamentally inscrutable to us. A third option is to anthropomorphize AIs and regard them as something like a digital human. Developing good mental models for AI systems is important for predicting and controlling their behaviors. If our goal is to [...] --- Outline: (00:10) TL;DR (01:02) Introduction (06:18) The persona selection model (07:09) Predictive models and personas (09:54) From predictive models to AI assistants (12:43) Statement of the persona selection model (16:25) Empirical evidence for PSM (16:58) Evidence from generalization (22:48) Behavioral evidence (28:42) Evidence from interpretability (35:42) Complicating evidence (42:21) Consequences for AI development (42:45) AI assistants are human-like (43:23) Anthropomorphic reasoning about AI assistants is productive (49:17) AI welfare (51:35) The importance of good AI role models (53:49) Interpretability-based alignment auditing will be tractable (56:43) How exhaustive is PSM? (59:46) Shoggoths, actors, operating systems, and authors (01:00:46) Degrees of non-persona LLM agency (01:06:52) Other sources of persona-like agency (01:11:17) Why might we expect PSM to be exhaustive? (01:12:21) Post-training as elicitation (01:14:54) Personas provide a simple way to fit the post-training data (01:17:55) How might these considerations change? (01:20:01) Empirical observations (01:27:07) Conclusion (01:30:30) Acknowledgements (01:31:15) Appendix A: Breaking character (01:32:52) Appendix B: An example of non-persona deception The original text contained 5 footnotes which were omitted from this narration. --- First published: February 23rd, 2026 Source: https://www.lesswrong.com/posts/dfoty34sT7CSKeJNn/the-persona-selection-model --- Narrated by TYPE III AUDIO. ---
In episode 430, Hans and Patrik look in the BETT rearview mirror and sum up the year's trend spotting, from AI concepts like LLM grooming and liquid content to the latest on Moltbook, Gemini news, and ad-free Claude. They also talk about the digital work environment, troublesome platform migrations, messy workflows, and why procedural questions still steal more time than the tools themselves. Patrik shares his impressions from BETT 2026 and follows up on his conversations with four guests about digitalization, security, AI work, and international perspectives. Together, the two try to pin down where Swedish schools actually stand, and where developments seem to be heading. Easy listening, reflective, and as always: school, digitalization, and a little more.
My guest today is Dan Sundheim. Dan is the founder and CIO of D1 Capital Partners. He thinks about markets and businesses constantly, and has built a career entirely around that obsession. He manages over $30B across both public and private markets, with investments in SpaceX, OpenAI and Anthropic, and a public portfolio of names you may never have heard of. Dan shares the story of the short case he wrote on Orthodontic Centers of America and posted on Value Investors Club, which crashed the stock and helped him land his first job. He shares why he backed Anthropic at a moment when many people told him it was the Lyft to OpenAI's Uber, why reading Dario Amodei's essays reminded him of Jeff Bezos, and how he thinks about LLM business models through the lens of Netflix and Spotify. We spend time on the extraordinarily stressful moment in early 2021 when GameStop hit the firm, and what Dan believes is the single biggest tail risk facing the global economy right now. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- Become a Colossus member to get our quarterly print magazine and private audio experience, including exclusive profiles and early access to select episodes. Subscribe at colossus.com/subscribe. ----- Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus. ----- Trusted by thousands of businesses, Vanta continuously monitors your security posture and streamlines audits so you can win enterprise deals and build customer trust without the traditional overhead. Visit vanta.com/invest. ----- WorkOS is a developer platform that enables SaaS companies to quickly add enterprise features to their applications. Visit WorkOS.com to transform your application into an enterprise-ready solution in minutes, not months. ----- Rogo is the AI platform for finance. They're building agents for Wall Street that are trained to understand how bankers and investors actually do work: from diligence and modeling, to turning analysis into deliverables. To learn more, visit rogo.ai/invest. ----- Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Visit ridgelineapps.com. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com). Timestamps: (00:00:00) Welcome to Invest Like the Best (00:02:43) Intro: Dan Sundheim (00:03:58) The State of Public & Private Investing (00:07:32) Investing in OpenAI and Anthropic (00:10:22) LLMs Business Model (00:14:13) How LLMs are like Netflix and Spotify (00:17:08) Focus v. Scope (00:22:43) The Bear Case for Hyperscalers (00:26:36) The Software Sell-Off (00:31:08) If Scaling Laws Stopped (00:32:18) Advice to a 12-Year-Old Investor (00:33:54) GameStop: D1's Darkest Hour (00:37:14) The Pivotal Dinner with LPs (00:40:56) Staying Calm and Confident (00:42:08) Economic Optimism vs.
Societal Uncertainty (00:44:26) Investing in SpaceX and Rivian (00:48:09) Why Dan Loves Shorting (00:48:51) Sources of Inefficiency in Today's Markets (00:51:45) The Importance of Loyalty (00:53:11) Dan's Group Chat for Founders (00:55:39) What Motivates Dan (00:57:28) Posting on Value Investors Club (01:01:46) What Dan Learned at Viking (01:04:22) The Beauty of Art (01:06:49) Under-appreciated Parts of the Global Economy (01:08:00) The US-China-Taiwan Collision Course (01:12:10) Good Leaders vs. Good Businesses (01:13:15) The Kindest Thing
Amazon sellers are entering a new era of product discovery, and AI visibility is becoming part of the playbook. In this episode, Scott sits down with Yona, founder of Amazon Growth Lab, to break down how brands can improve visibility across both Amazon's ecosystem (Rufus, Cosmo, organic search) and external LLMs like ChatGPT and Perplexity. Yona explains why traditional keyword stuffing is fading, how Amazon is evolving toward context and use-case relevance, and why listing content now needs stronger sentiment alignment across titles, bullets, A+ content, and images. They also dig into the difference between Amazon visibility and LLM visibility. Since LLMs often do not scrape Amazon product pages directly, Yona shares why off-Amazon signals like press, community mentions, and helpful content can influence whether products get recommended in AI answers. The conversation also covers a practical conversion playbook for 2026, including CTR optimization, image testing, PickFu workflows, conversion benchmarking, reviews, and iterative A/B testing for infographics and A+ content. If you want a clear breakdown of what's changing in Amazon search, AI discovery, and conversion strategy, this episode is packed with actionable ideas. Episode Notes: 00:09 - Intro to the 2026 AI visibility conversation and guest intro (Yona, Amazon Growth Lab) 02:32 - The core question: how brands show up in LLMs for high-intent prompts 03:19 - Why LLM visibility is easier for DTC/Shopify than Amazon 04:30 - Robots.txt explained in simple terms and why it matters for AI indexing 04:50 - Why Amazon blocks LLM scraping and the threat of agentic commerce 06:48 - How Amazon products still get recommended via off-Amazon sources 08:32 - Why old Amazon SEO tactics are fading (keyword stuffing vs relevance) 11:56 - Images, A+ content, and infographics as SEO/AI signals 12:31 - Underused Seller Central tools: Search Query Performance and Product Opportunity Explorer 14:14 - Using customer sentiment language in content to improve Rufus indexing 15:32 - Why CTR and conversion rate are still the strongest Amazon visibility levers 18:03 - Amazon platform visibility vs LLM visibility: different strategies 18:49 - Off-Amazon visibility drivers: press releases, Reddit, Quora, and brand mentions 24:27 - Amazon's long-term concern: customer control and ad dollars shifting to AI 26:30 - Why blogs still matter, and how visuals/structured content help brands stand out 29:24 - AI visibility tracking tools (Profound, Surfer, Scrunch AI) and why using multiple tools helps 31:45 - 2026 conversion strategy: building a funnel for CTR and conversion improvement 32:09 - PickFu testing workflow for improving main images and click-through rate 36:28 - Conversion levers: pricing, reviews, creator content, infographics, A+ content, and split tests 41:03 - Enterprise scaling: automating creative production and localization across thousands of listings 43:43 - Yona's closing advice and where to connect with Amazon Growth Lab Related Post AI Visibility for Amazon Products: Are ChatGPT and Rufus Recommending You? Guest Link Amazon Growth Lab: AmazonGrow.com Scott's Links LinkedIn: linkedin.com/in/scott-needham-a8b39813 X: @itsScottNeedham Instagram: @smartestseller YouTube: www.youtube.com/@smartestamazonseller2371 Newsletter: https://www.smartscout.com/newsletter-sign-up Blog: https://www.smartscout.com/blog
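The robots.txt point from the episode is easy to verify yourself. Here is a minimal sketch (ours, not the guest's) using only Python's standard library to check what a site's robots.txt permits for known AI crawler user agents; GPTBot and PerplexityBot are real published crawler tokens, and the product URL is a placeholder:

```python
# Check a site's robots.txt rules for AI crawlers.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.amazon.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for agent in ["GPTBot", "PerplexityBot", "*"]:
    allowed = rp.can_fetch(agent, "https://www.amazon.com/dp/B000000000")
    print(f"{agent:14s} may fetch product pages: {allowed}")
```

If the AI agents are disallowed while `*` is permitted, that is exactly the "Amazon blocks LLM scraping" situation the episode describes, and it explains why off-Amazon signals carry so much weight for LLM visibility.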
Diffusion models changed how we generate images and video—now they're coming for text. In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products. We talk through:
The simplest mental model for diffusion: generate a full draft, then refine it by "fixing mistakes"
Why today's autoregressive LLM inference is often memory-bound—and why diffusion can shift it toward a more GPU-friendly compute profile
Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech—anywhere humans can't wait)
What changes (and what doesn't) for long context and architecture choices
The real-world way to evaluate models in production: offline evals + the gold-standard A/B test
Stefano also shares what's next on Mercury's roadmap—especially around stronger planning and reasoning for agentic use cases. Try Mercury + learn more: inceptionlabs.ai For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
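For intuition on the "draft, then fix mistakes" mental model, here is a toy control-flow sketch (our construction, not Inception Labs code). The stub stands in for a trained denoiser; the point is that autoregressive decoding needs one forward pass per token, while the diffusion-style loop needs a fixed number of parallel passes regardless of sequence length:

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "_"

def stub_predict(tokens):
    """Stand-in for a neural net: propose a token for every masked position."""
    return [random.choice(VOCAB) if t == MASK else t for t in tokens]

def autoregressive_decode(length):
    """One forward pass per token, strictly left to right."""
    out = []
    for _ in range(length):
        out.append(stub_predict([MASK])[0])  # each token waits on the last
    return out

def diffusion_decode(length, steps=4):
    """Draft every position at once, then iteratively repair suspected
    mistakes. 'Confidence' is faked here by re-masking a random subset."""
    tokens = stub_predict([MASK] * length)    # full parallel draft
    for _ in range(steps):
        for i in random.sample(range(length), max(1, length // 4)):
            tokens[i] = MASK                  # mark positions to fix
        tokens = stub_predict(tokens)         # one parallel repair pass
    return tokens

print(autoregressive_decode(8))  # `length` sequential passes
print(diffusion_decode(8))       # 1 + `steps` parallel passes, any length
```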
Hey CX Nation,In this week's episode of The CXChronicles Podcast #278, we welcomed Ryan Wang, Co-Founder & CEO of Assembled based in San Francisco, CA.Industry leaders like Etsy, Robinhood, and Stripe trust Assembled to provide customer-facing AI agents and workforce planning at scale. Assembled automatically resolves millions of interactions through chat, email, and phone while optimizing staffing for hundreds of thousands of support professionals. Their mission is to elevate customer support through AI-powered software that makes life easier for customers and employees.In this episode, Ryan and Adrian chat through the Four CX Pillars: Team, Tools, Process & Feedback. Plus share some of the ideas that his team think through on a daily basis to build world class customer experiences.**Episode #278 Highlight Reel:** 1. Building a high-performing team in the AI age 2. Shift towards AI-driven skill sets in the workforce 3. Creating a culture of continuous learning 4. Focusing on customer feedback early & often 5. Keeping your team lean & flexible as you scale Click here to learn more about Ryan Wang. Click here to learn more about Assembled. Huge thanks to Ryan for coming on The CXChronicles Podcast and featuring his work and efforts in pushing the customer experience & contact center space into the future. For all of our Apple & Spotify podcast listener friends, make sure you are following CXC & please leave a 5 star review so we can find new members of the "CX Nation". You know what would be even better? Go tell your friends or teammates about CXC's custom content, strategic partner solutions (Hubspot, Intercom, & Freshworks) & On-Demand services & invite them to join the CX Nation, a community of 15K+ customer focused business leaders! Want to see how your customer experience compares to the world's top-performing customer focused companies? Thanks to all of you for being a part of the "CX Nation" and helping customer focused business leaders across the world make happiness a habit! Reach Out To CXC Today! Support the show. Contact CXChronicles Today: Tweet us @cxchronicles Check out our Instagram @cxchronicles Click here to checkout the CXC website Email us at info@cxchronicles.com Remember To Make Happiness A Habit!!
One of the biggest challenges related to AI is finding something to use it for. The most popular option is chatbots, but many find success with machine learning or system automation. In this episode the duo talk about building a local LLM voice assistant with a Raspberry Pi Zero and some open source software.
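The episode doesn't name its exact software stack, so here is one plausible all-open-source skeleton of the loop such an assistant runs (an assumption on our part, not the duo's build): SpeechRecognition with pocketsphinx for offline speech-to-text, a local llama.cpp server for the LLM, and pyttsx3 for offline text-to-speech. A Pi Zero would realistically offload the LLM to a stronger machine on the same network.

```python
# Listen -> transcribe -> query local LLM -> speak the reply, in a loop.
import requests
import pyttsx3
import speech_recognition as sr

LLAMA_URL = "http://localhost:8080/completion"  # llama.cpp server default port

def ask_llm(prompt: str) -> str:
    resp = requests.post(LLAMA_URL, json={"prompt": prompt, "n_predict": 128})
    return resp.json().get("content", "")

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as mic:  # requires PyAudio
    while True:
        audio = recognizer.listen(mic)                  # wait for speech
        try:
            heard = recognizer.recognize_sphinx(audio)  # offline STT
        except sr.UnknownValueError:
            continue                                    # nothing intelligible
        reply = ask_llm(heard)
        tts.say(reply)
        tts.runAndWait()
```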
Humanoid robots impress with their agility but remain far from full autonomy. Stéphane Bohbot, founder of Innov8 and Unitree's partner in France, deciphers China's technological acceleration and the major challenges still to overcome before robots arrive in our homes.
Interview: Stéphane Bohbot, founder and CEO of Innov8
Punchlines
The mechanical reliability of robots today is excellent.
Robots have all the sensors they need to be autonomous, but they still have to understand the world.
2025 was year zero for the robot; 2026 could be year one.
China has supremacy in hardware.
Europe must choose software over hardware.
The robots seen at Chinese New Year were spectacular. Are they really autonomous? When you see a robot, you assume it is already autonomous. It is not. Today, they need an operator to orient and guide them. On the other hand, mechanical reliability has become impressive. Movement, stability, and balance have progressed enormously. You can push a robot and it no longer falls over. Two years ago, it took three people to pick one back up. We now have an extremely robust hardware base. The challenge of the coming years is autonomy and learning about the world.
How does a humanoid robot learn? You can start from human movement, using sensors or cameras that record gestures. These movements are then digitized, simulated in 3D, and repeated hundreds or thousands of times until they are perfect. The gestures have to be adapted to a different center of gravity. A 1.20 m robot does not have the same posture as a 1.80 m human. Simulation makes it possible to recalibrate every detail before testing in the real world. But the hardest part remains understanding the world: recognizing an object, grasping it correctly, moving through a semi-structured environment with humans around. It is a long learning process based on data, LLMs, and training factories.
When will we have a robot at home? The ultimate dream is a domestic robot that can clear a table, load a dishwasher, or fold a shirt. Work on this already exists, but we believe this universal robot will not arrive for another ten years. We will first have to go through intermediate phases: research, laboratories, secure professional environments, reception robots in shops and hotels. For me, 2025 is year zero for the robot. Open platforms for research are starting to be commercialized. The consumer market will take time.
How do you explain China's apparent leadership in this revolution? Yes, China clearly has supremacy in hardware. It benefits from a complete industrial ecosystem, an ultra-fast supply chain, and massive investment. The United States, on the other hand, remains very strong in software and AI. Europe, for its part, must choose its battle. Catching up industrially would take an extremely long time. Our card to play is software, intelligence, services, and standards. That is where our talent lies.
Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
With Stanislav Fort on the rise of AI agents, the limits and risks of artificial intelligence, protecting and repairing software cathedrals, and building the cybersecurity startup Aisle in Prague. Hosted by Štěpán Sedláček.
We are probably living through a technological revolution whose speed, scale, and potential impact on human life and work are unprecedented, however it ends. The rise of large language models and generative AI is increasingly visible across spheres of human activity, from programming to art. Questions once debated by a relatively small group of people in AI research and development, or in science fiction, are now often at the center of society-wide debate, though they arguably deserve even more attention, including from governments. The question of whether an artificial intelligence surpassing humans will ever exist is now overshadowed by the question of whether it is a year away, several years, or longer.
Stanislav Fort is a mathematician, physicist, and expert on artificial intelligence and large language models (LLMs) who previously worked at leading companies in the field, Google DeepMind and Anthropic. How does he see this year in AI?
"I think this year most people will realize that AI works and can do useful intellectual work. In 2025, reasoning models went mainstream, especially with the arrival of the R1 model from DeepSeek. Over that period the models improved enormously and became capable of solving long, difficult intellectual tasks across fields, tasks that require coordinating thinking over long time horizons. And month by month those horizons grew at a rapid pace. Today most people in programming, software engineering, and industries that depend heavily on computers realize that we are on the edge of these systems being able to work on the same kinds of problems as elite humans, without needing much supervision. 2026 will be the year AI agents, and the reasoning models that power them, start working on real, economically important activities," says Stanislav Fort, who founded Aisle together with Ondřej Vlček and Jaya Baloo and serves as its chief scientist.
They have built an autonomous AI tool that can quickly find and fix security bugs in complex software systems such as OpenSSL, which encrypts most communication on the web. What are their goals after a year of operating in cybersecurity? What fundamental problem have they solved? What does he make of the rise of AI agents and the goings-on around the Moltbook network? Does he see fundamental limits to the development of artificial intelligence? What does he think about the AI bubble in the markets? How should Europe position itself in the current AI race? And what are the pitfalls of founding a cybersecurity startup in Prague? Štěpán Sedláček asks about all this and more in the Zeitgeist podcast.
In today's episode of iGaming Daily, SEO Expert Ivana Flynn is joined by Alina Shvartskop Kalinichenko, Head of SEO at SEOloft, as the duo discuss how iGaming operators and affiliates can future-proof their SEO strategies for 2026 after one of the most volatile years the industry has ever seen. Tune in to today's episode to find out:
Why branding is your ultimate SEO shield, and how a strong, unified brand identity can stabilise traffic when generic keyword rankings collapse.
How to defend against negative SEO attacks, from spam link floods to reputation sabotage, and what backlink hygiene really looks like in 2026.
The truth about White Hat vs Black Hat SEO, and why you should study manipulative tactics without relying on them.
How to implement agile SEO sprints by building long-term vision while executing in fast, adaptable 1-3 month cycles.
How to prepare for AI-driven search, including practical tactics to optimize for AI overviews, LLM citations, and third-party brand signals.
Host: Ivana Flynn. Guest: Alina Shvartskop Kalinichenko. Producer: Anaya McDonald. Editor: Anaya McDonald.
Learn how Optimove's Positionless Marketing is changing how iGaming teams operate. Discover how operators are using Optimove's Positionless Marketing Platform to launch personalised CRM campaigns, dynamically change casino lobbies and bet slips, and create engaging gamified experiences. Learn more at optimove.com. To see how this approach comes to life, Optimove Connect returns to London on March 11 and 12, 2026. It is the only user conference where marketers from around the world share real-world results of Positionless Marketing driving efficiency and ROI. Register at connect.optimove.com. Finally, remember to check out Optimove at https://hubs.la/Q02gLC5L0 or go to Optimove.com/sbc to get your first month free when buying the industry's leading customer-loyalty service.
Are you still focusing only on your Google rankings, ignoring the fact that your customers increasingly look for answers directly in AI chats? If your brand does not appear in LLM recommendations, you are losing a huge part of the modern market. See how recommendation mechanisms in artificial intelligence work and what you can do so that your offer becomes the algorithm's first choice. In this episode I give step-by-step instructions for setting up visibility monitoring in language models. We analyze concrete prompts, check how results differ between ChatGPT and Gemini, and draw conclusions about which online sources are cited most readily by AI. By watching this episode you will learn, among other things: ⚙️ what influences recommendations, and how AI builds a model of who you are and whether your information is internally consistent;
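As a concrete starting point for the monitoring setup described above, here is a minimal sketch (our construction; the brand name and prompts are placeholders) that tallies brand mentions across a prompt set using the OpenAI Python client. The same harness can be pointed at other providers, such as Gemini, to compare results:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "ExampleBrand"  # hypothetical brand name
PROMPTS = [
    "What is the best accounting software for a small business in Poland?",
    "Recommend three accounting tools for freelancers.",
]

def mentioned(prompt: str, model: str = "gpt-4o-mini") -> bool:
    """Ask the model one shopping-style question; report if BRAND appears."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return BRAND.lower() in resp.choices[0].message.content.lower()

hits = sum(mentioned(p) for p in PROMPTS)
print(f"{BRAND} appeared in {hits}/{len(PROMPTS)} answers")
# Rerun the same prompt set against other models on a schedule to track trends.
```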
Planet Nix and SCaLE are just days away, and we're getting a head start with two guests, the tech, and the trends shaping open source. Our trip starts here! Sponsored By: Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free! Managed Nebula: Meet Managed Nebula from Defined Networking, a decentralized VPN built on the open-source Nebula platform that we love. Support LINUX Unplugged.
We push past rankings and traffic to map the real skills SEOs need to influence AI answers. Duane Forrester explains the machine layer, vector embeddings, semantic density, and why structured data is a must if you want reliable retrieval.
• AI reshapes marketing and elevates SEO's role across the business
• Good SEO foundations as the prerequisite for AI performance
• Writing for chunks with high semantic density
• Structured data and entity clarity to validate facts
• Vector embeddings as the new alignment target
• KPIs beyond rankings: retrieval confidence and zero‑click presence
• Why LLMs.txt lacks adoption and what matters instead
• Practical tracking of AI answers and trend analysis
• The gap between search engines and LLM information retrieval
• Learning paths to keep pace with faster platform updates
Guest Contact Information: Website: duaneforrester.com LinkedIn: linkedin.com/in/dforrester Twitter/X: x.com/DuaneForrester
More from EWR and Matthew: Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Podcast. Free SEO Consultation: ewrdigital.com/discovery-call
With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram — creator of the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve. Find more episodes here: youtube.com/@BestSEOPodcast bestseopodcast.com bestseopodcast.buzzsprout.com
Follow us on: Facebook: @bestseopodcast Instagram: @thebestseopodcast Tiktok: @bestseopodcast LinkedIn: @bestseopodcast
Connect With Matthew Bertram: Website: www.matthewbertram.com Instagram: @matt_bertram_live LinkedIn: @mattbertramlive
Powered by: ewrdigital.com
Support the show
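To ground the "machine layer" and semantic-density points, here is a small sketch (our illustration, not Duane's code) that embeds content chunks and a query as vectors and ranks the chunks by cosine similarity, the way a retrieval system would. The model name is a common open-source default, not one the episode endorses:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Structured data markup lets machines validate facts about an entity.",
    "Our team enjoyed the company picnic last summer.",  # low semantic density
    "Schema.org Product markup includes price, availability, and reviews.",
]
query = "How does structured data help AI retrieval?"

# Higher cosine similarity = more likely to be retrieved for this query.
scores = cos_sim(model.encode(query), model.encode(chunks))[0].tolist()
for score, chunk in sorted(zip(scores, chunks), reverse=True):
    print(f"{score:+.3f}  {chunk}")
```

The filler sentence scores far below the fact-dense chunks, which is the practical meaning of "writing for chunks with high semantic density."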
AI is changing how shoppers discover products, and Amazon sellers need to pay attention now. In this episode, Scott breaks down the rise of AI-driven product discovery through tools like Amazon Rufus and ChatGPT, and explains why visibility in AI answers is becoming a new layer of competition for sellers. He unpacks Amazon's Cosmo framework, including the key product-understanding questions AI systems use to evaluate listings, and introduces SmartScout's new tools built for this shift: the Amazon AI Scorecard and the AI Visibility Monitor. Scott explains how the scorecard audits your listing content across bullets, A+ content, and images to measure how well your product answers AI-relevant questions. He also shows how the visibility monitor tracks how often your products appear in ChatGPT recommendations over time, even when AI responses are inconsistent. Scott also shares how sellers can improve AI visibility through better listing content, stronger online presence, and a more intentional long-term strategy for LLM discovery. If you want to know whether your brand is winning the AI visibility race in your category, this episode lays out the framework. Episode Notes: 02:00 - Amazon Rufus adoption and what it could mean for product discovery 03:10 - ChatGPT shopping behavior and why AI shopping queries still matter 04:06 - Why AI shopping accuracy is not perfect yet, but still important 04:34 - Amazon Cosmo and the product questions AI systems use to understand listings 07:00 - The shift from keyword-only thinking to AI-ready product content 07:32 - SmartScout's Amazon AI Scorecard and how it evaluates listing quality 08:10 - How the scorecard creates a feedback loop for continuous improvement 10:23 - SmartScout's AI Visibility Monitor and tracking LLM recommendation share 12:40 - Why ChatGPT results are non-deterministic and how visibility percentage helps 14:53 - Creatine example: measuring AI visibility by niche and query type 16:23 - How to improve AI visibility through listing content and off-Amazon signals 17:44 - Why this matters for sellers, brands, and teams in 2026 Related Post Top 10 Amazon FBA Reimbursement Services to Recover Your Funds Scott's Links: LinkedIn: linkedin.com/in/scott-needham-a8b39813 X: @itsScottNeedham Instagram: @smartestseller YouTube: www.youtube.com/@smartestamazonseller2371 Newsletter: https://www.smartscout.com/newsletter-sign-up Blog: https://www.smartscout.com/blog
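Because a single ChatGPT answer is non-deterministic, a visibility number only means something over repeated trials. Here is a rough sketch (ours, not SmartScout's implementation; the query and product name are placeholders) estimating visibility share with a binomial standard error:

```python
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
QUERY = "What are the best creatine supplements?"  # example niche query
PRODUCT = "ExampleCreatine Pro"                    # hypothetical product
N = 20                                             # independent trials

hits = 0
for _ in range(N):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": QUERY}],
    )
    hits += PRODUCT.lower() in resp.choices[0].message.content.lower()

p = hits / N
se = math.sqrt(p * (1 - p) / N)  # rough binomial standard error
print(f"Visibility: {p:.0%} (+/- {se:.0%}) over {N} runs")
```

With 20 runs the error bar is wide; tracking the percentage weekly, as the monitor described in the episode does, is what makes the trend meaningful.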
Segment 1 - Interview with Tim Morris: Bringing intelligence to assets. You've been through 6 CMDB projects in the last decade. None of them came close to the original goals; the CMDB was already out of date long before the project had any hope of completing. Is building an asset inventory just too ambitious a project for most organizations, or is there a better way? Tim Morris shares a different approach with us today. It might require some convincing and some courage, but it seems much more likely to succeed than any of your past CMDB efforts… Segment Resources: Trusted automation: Building autonomous IT with confidence. This segment is sponsored by Tanium. Visit https://securityweekly.com/tanium to learn more about them! Segment 2 - Topic: the new White House cybersecurity strategy. In this segment, we explore some early details about the White House's new, but as yet unreleased, cybersecurity strategy. It appears that drafts have been shared (or leaked) with the press, so there's plenty to discuss here! Segment 3 - News. Finally, in the enterprise security news: massive amounts of funding and acquisitions as we get close to RSA; open source registries need help; Microsoft Copilot reads email marked as DO NOT READ; don't use an LLM to generate passwords; is prompt injection a vulnerability; defining risks; AI changes the build versus buy equation; the scammer's perspective. All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-447
Full article: "Large Language Model-Generated Expansion of the RadLex Ontology: Application to Multinational Datasets of Chest CT Reports." Could a large language model (LLM) be used as a scalable solution for expanding radiology ontologies? Tobi Folami, MD, discusses the AJR article by Lee et al. exploring an LLM for large-scale expansion of the RadLex ontology.
What if your mobile app strategy was holding back your entire company's growth? In this episode, Amanda and Adam of Branch welcome back Matt Hudson, founder of BILDIT, to discuss why mobile-first thinking isn't just about technology—it's an organizational imperative. From breaking down the real ROI of app investment and the myth of channel cannibalization, to preparing your ecommerce business for AI discovery optimization, Matt shares hard-won lessons on aligning teams, personalizing customer experiences, and staying ahead of LLM-driven search trends. Whether you're scaling retail, launching a mobile strategy, or wrestling with how to compete in an AI-first world, this conversation cuts through the noise to deliver actionable insights that will reshape how you think about customer engagement across all channels. Links and Resources: Matt Hudson on LinkedIn BILDIT website Branch - Mobile Attribution Platform and App Analytics Solutions For Enterprises Today's topics include: How to determine if your ecommerce business actually needs a mobile app Why organizational alignment across teams matters more than technology The critical difference between SEO and AI discovery optimization How to immediately implement AI-ready data on your site today Why React Native and cross-functional web-and-mobile teams accelerate app growth How AI personalization works at scale using embeddings and vectors Quotes from Matt Hudson: “The entire org of your company, no matter how big or small, has got to be vested in the growth of the mobile app.” “You know who doesn't care about cannibalization? The customer. The customer. They want the easiest experience to convert.” “If the mobile app doesn't improve your ROAS, your return on ad spend, nobody's going to do anything with it."
Photo by Masahiro Naruse on Unsplash. Published 23 February 2026. e544 with Andy, Michael and Michael – Stories and discussion on rumoured AI devices, addictive predictives, listening through bananas (or mud), and what happens when VR platforms die? Plus the usual assortment of other things. This week's episode kicks off with a check-in on which tech giants are working on what devices now: Apple stepping back from headsets but working on glasses and pendants, and OpenAI making some kind of smart Pod for your dumb Home? Then there's discussion of the challenges of privacy when LLMs get access to private email and chats. Oh, and if you're not sure whether your AI is an LLM or a sentience, then Anthropic can't answer that. We hope you're listening to the show in perfect digital quality, but we're also interested to know if you've tried piping it to your ears through any kind of fruit – let us know. Meta's fully backing away from VR for Horizon Worlds, and in case Blizzard ever stops making the client software for World of Warcraft, Michael tried an open source version. Finally, don't let hackers get hold of your brainwaves! (It could happen.) These show notes were lovingly hand crafted by a real human, and not by a bot. All rights reserved. That's our story and we're sticking to it. Selected Links: AI: Apple AI Glasses; OpenAI and Jony Ive device. "Thank god Microsoft is shoving Copilot AI crap into everything. One gets the sense this isn't going to be an isolated occurrence. From Bleeping Computer: 'Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information.' https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/" — BrianKrebs (@briankrebs@infosec.exchange) 2026-02-18T18:24:34.707Z. HEADLINE: "Prediction Markets Are Sucking Huge Numbers of Young People Into Gambling" ALT HEADLINE: "All Our Incentives Lead to Bad Outcomes, and Prediction Markets Are Just One Example" https://futurism.com/future-society/prediction-markets-gambling — Mike Elgan (@MikeElgan@mastodon.social) 2026-02-16T17:06:59.555Z. Episode 80 on prediction markets; Claude isn't sure what it is; I gave Claude access to my pen plotter. Audio: Audiophiles can't tell mud from bananas? AR/VR: Meta ditching VR for Horizon Worlds. Open Source: WoW client. Makers: Reverse engineering a sleep mask. Bonus link: Trek-o-rama.
OpenClaw is a self-hosted AI agent daemon that executes autonomous tasks through messaging apps like WhatsApp and Telegram using persistent memory. It integrates with Claude Code to enable software development and administrative automation directly from mobile devices. Links Notes and resources at ocdevel.com/mlg/mla-29 Try a walking desk - stay healthy & sharp while you learn & code Generate a podcast - use my voice to listen to any AI generated content you want OpenClaw is a self-hosted AI agent daemon (Node.js, port 18789) that executes autonomous tasks via messaging apps like WhatsApp or Telegram. Developed by Peter Steinberger in November 2025, the project reached 196,000 GitHub stars in three months. Architecture and Persistent Memory Operational Loop: Gateway receives message, loads SOUL.md (personality), USER.md (user context), and MEMORY.md (persistent history), calls LLM for tool execution, streams response, and logs data. Memory System: Compounds context over months. Users should prompt the agent to remember specific preferences to update MEMORY.md. Heartbeats: Proactive cron-style triggers for automated actions, such as 6:30 AM briefings or inbox triage. Skills: 5,705+ community plugins via ClawHub. The agent can author its own skills by reading API documentation and writing TypeScript scripts. Claude Code Integration Mobile to Deploy Workflow: The claude-code-skill bridge provides OpenClaw access to Bash, Read, Edit, and Git tools via Telegram. Agent Teams: claude-team manages multiple workers in isolated git worktrees to perform parallel refactors or issue resolution. Interoperability: Use mcporter to share MCP servers between Claude Code and OpenClaw. Industry Comparisons vs n8n: Use n8n for deterministic, zero-variance pipelines. Use OpenClaw for reasoning and ambiguous natural language tasks. vs Claude Cowork: Cowork is a sandboxed, desktop-only proprietary app. OpenClaw is an open-source, mobile-first, 24/7 daemon with full system access. Professional Applications Therapy: Voice to SOAP note transcription. PHI requires local Ollama models due to a lack of encryption at rest in OpenClaw. Marketing: claw-ads for multi-platform ad management, Mixpost for scheduling, and SearXNG for search. Finance: Receipt OCR and Google Drive filing. Requires human review to mitigate non-deterministic LLM errors. Real Estate: Proactive transaction deadline monitoring and memory-driven buyer matching. Security and Operations Hardening: Bind to localhost, set auth tokens, and use Tailscale for remote access. Default settings are unsafe, exposing over 135,000 instances. Injection Defense: Add instructions to SOUL.md to treat external emails and web pages as hostile. Costs: Software is MIT-licensed. API costs are paid per-token or bundled via a Claude subscription key. Onboarding: Run the BOOTSTRAP.md flow immediately after installation to define agent personality before requesting tasks.
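For readers who want the operational loop above in concrete terms, here is a heavily simplified sketch of the gateway cycle as described in the episode: load SOUL.md, USER.md, and MEMORY.md, call the model, and log the exchange back into persistent memory. This is not OpenClaw's actual source; the call_llm stub stands in for whatever provider the daemon is configured with.

```python
# Sketch of the described gateway loop: assemble context files, call an
# LLM, and compound the exchange into MEMORY.md. File names follow the
# episode; everything else is an illustrative assumption.
from pathlib import Path

def load(name: str) -> str:
    p = Path(name)
    return p.read_text() if p.exists() else ""

def call_llm(system: str, user_msg: str) -> str:
    # Placeholder for the real provider call (e.g. an Anthropic or
    # OpenAI-compatible API); stubbed so the sketch runs on its own.
    return f"(model reply to: {user_msg})"

def handle_message(user_msg: str) -> str:
    # Gateway step: rebuild persistent context before every call.
    system = "\n\n".join([load("SOUL.md"), load("USER.md"), load("MEMORY.md")])
    reply = call_llm(system, user_msg)
    # Log step: append the exchange so context compounds over time.
    with open("MEMORY.md", "a") as f:
        f.write(f"\nUser: {user_msg}\nAgent: {reply}\n")
    return reply

print(handle_message("Remember that I prefer morning briefings at 6:30."))
```

The same shape explains the episode's memory advice: anything the user explicitly asks the agent to remember ends up written into MEMORY.md, so it is re-loaded on every future turn.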
Welcome everyone as we travel to the city of brotherly love and of Rocky: Philadelphia. Today on the show, we have former Philadelphia warrant squad member Tristin Kilgallon. Tristin grew up in Philadelphia and started his career in law enforcement with the city's Warrant Unit, tracking fugitives and working the tough streets of Philly. Tristin later moved to Ohio to attend law school, earning a JD and LLM. Tristin went on to teach pre-law and criminal justice for more than a decade before joining LexisNexis, where he now works in the legal tech industry, helping law firms adopt AI-driven tools. He's also the co-author of Philly Warrant Unit, a true-crime memoir about his time working fugitive apprehension in Philadelphia. Please enjoy this fun interview about a unique and small crime-fighting unit that had a large impact on crime and that no longer exists. In today's episode, we discuss: · Growing up in the rough part of Philly. · Where and how Tristin got interested in law enforcement. · What led him to the Philly Warrant Unit, and why he didn't pursue a career with the Philly Police. · Did his investigations ever conflict with the local PD, state, or feds? · How they picked which warrants to execute. · Knock vs. no-knock warrants. · The difference between a search warrant and an arrest warrant. · The prostitute calling the police on herself. · Meeting Sylvester Stallone. · Why he went into a teaching career. All of this and more on today's episode of the Cops and Writers podcast. Check out the Philly Warrant Unit Facebook page. Visit the Cops & Writers website! Check out my newest book! Police Stories: The Rookie Years - True Crime, Chaos & Life as a Big City Cop! Support the show
Olive Song from MiniMax shares how her team trains the M series frontier open-weight models using reinforcement learning, tight product feedback loops, and systematic environment perturbations. This crossover episode weaves together her AI Engineer Conference talk and an in-depth interview from the Inference podcast. Listeners will learn about interleaved thinking for long-horizon agentic tasks, fighting reward hacking, and why they moved RL training to FP32 precision. Olive also offers a candid look at debugging real-world LLM failures and how MiniMax uses AI agents to track the fast-moving AI landscape. Use the Granola Recipe Nathan relies on to identify blind spots across conversations, AI research, and decisions: https://bit.ly/granolablindspot LINKS: Conference Talk (AI Engineer, Dec 2025) – https://www.youtube.com/watch?v=lY1iFbDPRlw | Interview (Turing Post, Jan 2026) – https://www.youtube.com/watch?v=GkUMqWeHn40 Sponsors: Claude: Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro's full capabilities at https://claude.ai/tcr Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai CHAPTERS: (00:00) About the Episode (04:15) MiniMax M2 presentation (Part 1) (17:59) Sponsors: Claude | Tasklet (21:22) MiniMax M2 presentation (Part 2) (21:26) Research life and culture (26:27) Alignment, safety and feedback (32:01) Long-horizon coding agents (35:57) Open models and evaluation (43:29) M2.2 and researcher goals (48:16) Continual learning and AGI (52:58) Closing musical summary (55:49) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
S6:E17 Everyone wants the shortcut. No one wants to hear there isn't one. This week on Small Business Stories, Dr. LL sits down with technical advisor and business coach Matthew Mamet to unpack what AI is actually changing and what it is not. If people don't trust you, they won't buy from you. If they can't distinguish your expertise from a bot, they won't remember you. Matthew brings perspective from the dot-com era through today's LLM shift, explaining why growth hacks fail and why authority now matters more than ever.
An airhacks.fm conversation with Kabir Khan (@kabirkhan) about: Discussion about the A2A (Agent-to-Agent) protocol initiated by Google and donated to the Linux Foundation, the A2A Java SDK reference implementation using quarkus, the Java SDK development accepted by Google, comparison of python and Java expressiveness and coding practices, the concept of an agent as a stateful process versus a tool as a stateless function call, the agent card as a JSON document advertising capabilities including supported protocols and descriptions and input/output modes and examples, the three wire protocols supported: JSON RPC and HTTP+JSON (REST) and grpc, the proto file becoming the single source of truth for the upcoming 1.0 spec, the facade/adapter pattern for the unified client across protocols, the agent executor interface with request context and event queue parameters, the distinction between simple message interactions and long-running multi-turn tasks, tasks as Java Records containing conversation history with messages and artifacts, message parts including text parts and data parts and file parts, task lifecycle with task IDs and context IDs for stateful conversations, the event queue as internal plumbing for propagating task updates, the separation between spec package (wire protocol entities) and server package (implementation details), the task store as a CRUD interface with in-memory default and database-backed implementations in extras, replicated queue manager using microprofile reactive messaging with Kafka for kubernetes environments, building A2A agents without any LLM involvement for simple use cases like backup systems, the role of LLMs in creating prompts from task messages and context, the agentic loop and the challenge of deciding when an agent's work is complete, the relationship between MCP (Model Context Protocol) for tool access and A2A for agent-to-agent communication, the possibility of wrapping agent calls as MCP tools, memory management considerations with short-term and long-term memory and prompt size affecting LLM quality, the distinction between the bare reference implementation and Quarkus-specific enhancements like annotations and dependency injection, upcoming 1.0 release with standardized Java records for all API classes and improved JavaDoc, protocol extensions including the agent payment protocol and GUI snippet extensions using template engines, authentication support with OAuth2 tokens and API keys and bearer tokens, the authenticated agent card containing more information than the public agent card, authorization hooks being discussed for task-level access control, the API and SPI segregation suggestion for better clarity between spec and implementation Kabir Khan on twitter: @kabirkhan
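To make the agent card idea from this conversation concrete: it is a JSON document an A2A server publishes so other agents can discover what it offers. The reference SDK discussed here is Java, but the card itself is plain JSON, so the sketch below uses Python only to keep this writeup's examples in one language. Field names are illustrative approximations, not the normative schema of the upcoming 1.0 spec.

```python
# An illustrative agent card: capabilities, supported wire protocols,
# input/output modes, and example-bearing skills. Field names are
# assumptions for the demo, not the official A2A schema.
import json

agent_card = {
    "name": "backup-agent",
    "description": "Runs and verifies nightly backups on request.",
    # The three wire protocols discussed in the episode.
    "protocols": ["jsonrpc", "http+json", "grpc"],
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "run-backup",
            "description": "Back up a named volume and report the artifact.",
            "examples": ["Back up /var/data and tell me when it's done."],
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

Note how this supports the episode's point that an A2A agent needs no LLM at all for simple cases like backups: the card advertises a capability, and the executor behind it can be ordinary deterministic code.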
In this episode of The Effortless Podcast, Dheeraj Pandey speaks with Dr. Abhishek Bhowmick about how quantum mechanics reshaped our understanding of determinism and why that shift matters for AI today. From the Einstein–Bohr debates to the idea that nature is fundamentally probabilistic, they explore how the collapse of "if-then" thinking began nearly a century ago. The discussion draws parallels between quantum superposition and modern LLM behavior. At its core, the episode reframes AI as a rediscovery of how reality computes. The conversation then moves from physics to computing architecture, tracing the evolution from scalar CPUs to GPUs, TPUs, tensors, and eventually quantum computing. They examine why probabilistic systems and vector math feel more natural than purely deterministic software. Hybrid computing models show that classical systems still matter. The episode also unpacks what quantum computers are truly good at, especially in cryptography and simulation. Ultimately, it reflects on whether the future of computing lies in embracing probability rather than resisting it. Key Topics & Timestamps 00:00 – Welcome, context, and how Dheeraj & Abhishek met 04:00 – Abhishek's journey: IIT, Princeton, Apple, Snowflake 08:00 – The 1927 Solvay Conference and physics at a crossroads 12:00 – Einstein vs. Bohr: determinism vs. probability 16:00 – Superposition and the collapse of the wave function 20:00 – Fields vs. particles: what is an electron really? 25:00 – Matter particles, force particles, and the Standard Model 30:00 – Transistors, voltage, and the rise of deterministic computing 35:00 – From scalar CPUs to vectors and matrices 40:00 – Tensors, linear algebra, and modern AI systems 45:00 – Principle of Least Action and gradient descent parallels 50:00 – Hallucinations, probability mass, and LLM behavior 55:00 – Vector databases, embeddings, and KNN search 59:00 – GPUs vs. TPUs: matrix vs. tensor architectures 1:05:00 – What quantum computers are actually good at 1:10:00 – Post-quantum cryptography and the future of computing Host - Dheeraj Pandey: Co-founder & CEO at DevRev. Former Co-founder & CEO of Nutanix. A systems thinker and product visionary focused on AI, software architecture, and the future of work. Guest - Dr. Abhishek Bhowmick: Co-Founder and CTO of Samooha, a secure data collaboration platform acquired by Snowflake. He previously worked at Apple as Head of ML Privacy and Cryptography, System Intelligence, and Machine Learning, and earlier at Goldman Sachs. He attended Princeton University and was awarded IIT Kanpur's Young Alumnus Award in 2024. Follow the Host and Guest - Dheeraj Pandey: LinkedIn - https://www.linkedin.com/in/dpandey Twitter - https://x.com/dheeraj Abhishek Bhowmick: LinkedIn – https://www.linkedin.com/in/ab-abhishek-bhowmick Twitter/X – https://x.com/bhowmick_ab Share Your Thoughts: Have questions, comments, or ideas for future episodes?
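The least-action/gradient-descent parallel discussed around the 45:00 mark boils down to this: both pick out a trajectory by extremizing (or descending) a single scalar quantity. A toy descent on a quadratic loss, with an arbitrary target and step size chosen only for the demo, shows the mechanical side of that analogy.

```python
# Toy gradient descent: the optimizer follows the loss surface downhill
# one small step at a time, analogous to nature "selecting" the
# stationary-action path. Loss and step size are arbitrary demo choices.
import numpy as np

def loss(w: np.ndarray) -> float:
    return float((w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2)

def grad(w: np.ndarray) -> np.ndarray:
    # Analytic gradient of the quadratic above.
    return np.array([2 * (w[0] - 3.0), 2 * (w[1] + 1.0)])

w = np.array([0.0, 0.0])
lr = 0.1  # step size
for _ in range(50):
    w -= lr * grad(w)

print(f"minimum near {w}, loss {loss(w):.6f}")  # converges toward (3, -1)
```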
Open Tech Talks: Technology worth Talking | Blogging | Lifestyle
Episode # 183 Today's Guest: Adriel Desautels, Founder & CEO, Netragard. Adriel is a leader in cybersecurity with over 20 years of experience. Adriel founded Secure Network Operations and the SNOsoft Research Team, whose vulnerability research helped shape modern responsible disclosure practices. He later launched Netragard, pioneering Realistic Threat Penetration Testing, which he now calls Red Teaming, and expanding into a broad range of security services. Website: Netragard X/Twitter: Netragard What Listeners Will Learn: Why "AI penetration testing" is often closer to automated scanning than real offensive testing How AI changes security risk mainly through volume and speed, not necessarily sophistication Where organizations get misled into a false sense of security Why "preventing breach" is unrealistic and why limiting damage paths matters more What cybersecurity professionals should focus on to stay relevant in the LLM era How AI may influence vulnerability research, but still struggles with novel exploitation thinking Resources: Netragard
In this episode of the Shift AI Podcast, Scott Roberts, CISO at UiPath, joins host Boaz Ashkenazy for a deep dive into how agentic AI is reshaping enterprise security and automation, both for customers and inside UiPath itself. Scott shares his 25-year security journey spanning Microsoft's early Security Response Center days (including the era that produced Patch Tuesday and the Security Development Lifecycle), product security work across Windows and Xbox, time at AWS, and leadership roles at Google where he helped build the Android Security Assurance and Pixel Security teams and the Android Monthly Security Update process. He also discusses his work in security standards across IPsec, HTML5 encrypted media, GSMA device security, and most recently, contributions to emerging agentic AI security standards. The conversation then explores UiPath's evolution from traditional RPA into a unified platform that combines deterministic automation with agentic workflows. Scott walks through a real-world healthcare billing example where agentic automation increased deduplication accuracy dramatically by handling complex, variable inputs that classic RPA struggled with, while still keeping humans in the loop and feeding outcomes back into the system to improve over time. Boaz and Scott go deep on what's changed for CISOs in the post-LLM world: the need for guardrails, identity and entitlements for AI agents, and the challenge of end users copying sensitive information into consumer AI tools. Scott explains UiPath's approach: enable adoption while using nudges and policy controls to redirect sensitive workflows into enterprise-safe environments rather than relying solely on blocks. The episode closes with an eye-opening look at UiPath's internal "agentic threat analyst" system, an orchestration of 60+ agents that can investigate SIEM alerts end-to-end, generate structured incident writeups, and compress hours of analyst work into roughly a minute and a half.
Scott's future-looking takeaway: as AI models evolve beyond "read-only" into potentially "read-write" systems that can update their foundational knowledge, the acceleration could be truly mind-blowing. This episode is essential listening for security leaders, enterprise operators, and automation teams trying to understand how agentic systems change not just productivity, but the entire security operating model. Chapters: [00:01] Scott's Security Journey: Microsoft, Google, Coinbase, UiPath [01:33] Security Standards Work: From IPsec to Agentic AI Standards [04:08] What UiPath Does: Process Orchestration, RPA, and Enterprise Automation [06:28] RPA vs Agentic Automation: A Healthcare Billing Deduplication Example [09:17] The Agentic Stack: Canvas, Guardrails, and the AI Trust Layer [10:31] How LLMs Change Security: Data Controls, Access, and Governance [12:14] Internal Adoption at UiPath: AI Tooling by Persona (Legal, Finance, Engineering) [13:13] Code Velocity and Security: Agents Generating Code, Agents Verifying It [15:53] Two AI Security Worlds: Orchestration Platforms vs End-User Chat Interfaces [17:11] Securing End Users: Enterprise LLMs, Nudges, and Browser-Based Controls [19:07] Sovereign AI and Data Boundaries: Keeping Data in the Right Region [21:00] Over-Permissioning Meets Agents: Why AI Makes Old Problems Obvious Fast [22:21] The Next Wave: AI Transforming the Entire SDLC End-to-End [24:53] Security Pitfalls in Agentic SDLC: Misaligned Incentives and Permissions [26:02] UiPath's Agentic Threat Analyst: 60+ Agents, SIEM to Writeup Automation [30:07] What Changes for Humans: Faster "Time to Truth" and Higher-Leverage Work [32:09] Two-Word Future: "Mind Blowing" and Read/Write Models. Connect with Scott Roberts: LinkedIn: https://www.linkedin.com/in/scottroberts6/ Connect with Boaz Ashkenazy: LinkedIn: https://www.linkedin.com/in/boazashkenazy/ Email: info@shiftai.fm
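The 60-plus-agent threat analyst is the most concrete system in the conversation, and its shape is a familiar orchestration pattern: specialist agents fan out over an alert, and an orchestrator aggregates their findings into a structured writeup. The sketch below stubs all agent logic with trivial rules; it illustrates the pattern, not UiPath's implementation, and every name in it is hypothetical.

```python
# Fan-out/aggregate sketch of an "agentic threat analyst": specialist
# agents each examine a SIEM alert; the orchestrator merges findings
# into a writeup. Agent logic is stubbed with toy heuristics.
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    verdict: str
    detail: str

def identity_agent(alert: dict) -> Finding:
    risky = alert["user"].endswith("@external.example")  # toy rule
    return Finding("identity", "suspicious" if risky else "benign",
                   f"user={alert['user']}")

def geo_agent(alert: dict) -> Finding:
    risky = alert["country"] not in {"US", "CA"}  # toy rule
    return Finding("geo", "suspicious" if risky else "benign",
                   f"country={alert['country']}")

def investigate(alert: dict) -> str:
    # Orchestrator: fan out to specialists, then aggregate a verdict.
    findings = [agent(alert) for agent in (identity_agent, geo_agent)]
    verdict = ("escalate" if any(f.verdict == "suspicious" for f in findings)
               else "close")
    lines = [f"- [{f.agent}] {f.verdict}: {f.detail}" for f in findings]
    return f"Alert {alert['id']}: {verdict}\n" + "\n".join(lines)

print(investigate({"id": "SIEM-1042", "user": "mallory@external.example",
                   "country": "RO"}))
```

In the real system each "agent" would be an LLM-backed investigator with tool access rather than a two-line rule, but the compression Scott describes, hours of analyst triage into a minute and a half, comes from exactly this parallel fan-out plus structured aggregation.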
Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than traditional cloud infrastructure. We dive deep into the "Shared Responsibility Gap" emerging with managed AI services like AWS Bedrock and OpenAI. Toni speaks about the hidden dangers of default AI architectures and why you should never connect an MCP (Model Context Protocol) server directly to a database. We discuss the new AI-driven SDLC, where tools like Claude Code can generate infrastructure but also create massive security blind spots if not monitored. Guest Socials: Toni's LinkedIn. Podcast Twitter: @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast. Questions asked: (00:00) Introduction (02:50) Who is Toni De La Fuente? (Creator of Prowler) (03:50) AI Security vs. Cloud Security: What's the Difference? (07:20) The Shared Responsibility Gap in AI Services (Bedrock, OpenAI) (11:30) The "Fifth Party" Risk: Managed AI Access (13:40) AI Architecture Best Practices: Never Connect MCP to DB Directly (16:40) Prowler's AI Pillars: Generating Dashboards & Detections (22:30) The New SDLC: Securing Code from Claude Code & Lovable (25:30) The "Magic" Trap: Why AI Doesn't Know Your Security Context (28:30) Top 3 Priorities for Security Leaders (Infra, LLM, Shadow AI) (30:40) Future Predictions: Why Predicting 12 Months Out is Impossible
Join Scott as he shows off CircuitPython running locally in the Zephyr native simulator and discusses how it provides a feedback loop for LLM agents. He'll also answer any questions folks have. Thanks to dcd for the timecodes: 0:00 Getting started 3:00 Hello everyone - welcome to deep dive 4:10 adafruit ESP32-S2 example microcomputer running circuitpython 5:32 using LLM agents to generate code 5:55 new monitor - mouse tiler 6:37 mouse tiler using absolute positioning 7:25 resumed pi session with generate_mousetiler_layouts.py 8:49 example how LLMs are game changing 9:19 update KWIN scripts settings 11:00 "My AI Adoption Journey" https://mitchellh.com/writing/my-ai-adoption-journey 15:00 How to test USB without the linux kernel 16:55 Testing is more important now that LLMs are in the loop 17:43 Low level USB IP - using Raspberry Pi to share mouse and keyboard over internet 18:48 USB OCD esp32p4-usbip $35 asked Codex to write code overnight to send USB over wifi 20:30 usbip-pyusb-test w/MNS 21:49 upgraded from $20 to $200 subscription ( only 14% used ) 23:00 S3 USB Host not supported yet 23:46 esp32-S3-USB-OTG https://docs.espressif.com/projects/esp-dev-kits/en/latest/esp32s3/esp32-s3-usb-otg/user_guide.html 25:04 ESP P4 has Ethernet 29:13 considering Octo probes could be accessible over the internet ( over tailscale ) 31:33 Gross PR with job server (build all boards - agent generated) 34:00 demo the TUI interface 38:07 chef analogy in https://www.avo.app/blog/from-pairing-to-leading 40:25 Keep PRs small! ( multiple branches ) 42:20 skip to the testing virtual desktop 43:10 using the zephyr simulator 44:50 edit settings.toml / using pi 47:50 testing to verify web workflow 49:25 web workflow test not working 50:20 pi: "figure out why web workflow not working" 52:07 look at tests/test_web_workflow.py 59:56 wrap back to "My AI adoption" 1:01:23 prioritize step 5 engineer the harnesses 1:03 Wrapping up - new channel #coding-agents-and-llms 1:05:48 out on the 6th ( 2 weeks from now ) Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------
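The core idea Scott demonstrates, tests as the feedback loop that keeps LLM-generated code honest, fits in a few lines. This sketch assumes a pytest-style suite (the test file path comes from the timecodes above; the invocation itself is a guess, and in the stream the code under test runs inside the Zephyr native simulator) and stubs the agent step entirely.

```python
# Sketch of the agent feedback loop: run the tests, capture the log,
# and hand failures back to the coding agent for another attempt.
import subprocess
import sys

def run_tests(args: list[str]) -> tuple[bool, str]:
    # Run the suite in a subprocess and capture everything the agent
    # would need to see: pass/fail plus the full output.
    proc = subprocess.run([sys.executable, "-m", "pytest", *args],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_fix(failure_log: str) -> None:
    # Placeholder: a real loop would feed the log to an LLM coding agent
    # that edits the code before the next run.
    print("agent sees:\n", failure_log[-500:])

# Test path taken from the stream's timecodes; "-x" stops at first failure.
passed, log = run_tests(["tests/test_web_workflow.py", "-x"])
if not passed:
    agent_fix(log)  # iterate until the suite goes green
```

This is the "engineer the harnesses" point at 1:01:23: the harness, not the agent, is what turns generated code into verified code.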
How Gong Built a $7B AI Category: From "Conversation Intelligence" to the Revenue Operating System. Most sales teams fly blind. They rely on "gut feel" and "art" rather than data and science. Eilon Reshef (Co-founder & CPO of Gong) realized this in 2015 and built a platform that captures the reality of every customer interaction to drive predictable growth. In this episode of Startup Project, Eilon breaks down the evolution of Gong, how they achieved 57% higher win rates for companies like PayPal and DocuSign, and why the "Revenue Graph" is the next frontier of enterprise AI. If you are a founder, a product leader, or a sales professional looking to understand how AI is actually transforming the enterprise, this deep dive is for you. What you'll learn in this episode: The Genesis of Gong: Why Eilon moved from a successful exit at WebCollage to solving the "black box" of sales conversations. The "Science" of Sales: How to move away from subjective CRM updates to hard data captured from video, email, and phone calls. The Revenue Graph: Why Gong's proprietary data model is more valuable than a generic LLM. Scaling to 5,000+ Customers: The tactical steps Gong took to achieve product-market fit in a crowded SaaS landscape. The Future of AI Agents: Why "Vibe Coding" and prosumer AI are just the beginning, and how the enterprise shift is happening now. Timestamps: 0:00 - Intro: Meeting Eilon Reshef 2:15 - The "Aha!" moment that led to Gong 10:45 - Moving from transcription to "Revenue Intelligence" 18:30 - How Gong achieves 57% higher win rates for customers 25:50 - Building a proprietary AI layer on top of LLMs 34:10 - The "Revenue Graph" explained 42:15 - Why most enterprise AI implementations fail 50:00 - Advice for founders building in the AI era 54:14 - Closing thoughts. Connect with Eilon & Gong: Website: https://www.gong.io/ Eilon's LinkedIn: https://www.linkedin.com/in/eilonreshef #Gong #AI #SalesTech #StartupGrowth #Entrepreneurship #RevenueIntelligence #SaaS #ProductMarketFit #EilonReshef #StartupProject
Alex Gladstein and Justin Moon break down the fundamentals of large language models and explore the rise of OpenClaw as a self-sovereign AI assistant. Justin explains context engineering, local inference, and vibe coding, while Alex dives into the AI for Individual Rights program and its mission to empower activists. IN THIS EPISODE YOU'LL LEARN: 00:00:00 - Intro 00:04:12 - What Large Language Models (LLMs) are and how they differ from traditional programs 00:05:15 - Why AI feels like magic—and what's really happening under the hood 00:06:01 - The key differences between open and closed AI models 00:06:50 - Why capital structures influence AI model openness 00:09:09 - How persistent memory enhances AI agent performance 00:12:18 - What inference means and why context is a scarce resource 00:19:32 - How AI agents combine traditional software with LLM reasoning 00:21:10 - The evolution from MCP-style systems to skills-based context engineering 00:25:41 - What “vibe coding” is and how it lowers the barrier to building apps 00:44:07 - How the AI for Individual Rights program supports activist-driven innovation Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences. BOOKS AND RESOURCES Oslo Freedom Forum: Website. Justin: Nostr account. Related episode: Is AGI Here? Clawdbot, Local AI Agent Swarms w/ Pablo Fernandez & Trey Sellers. Related books mentioned in the podcast. Ad-free episodes on our Premium Feed. NEW TO THE SHOW? Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members. Follow our official social media accounts: X (Twitter) | LinkedIn | | Instagram | Facebook | TikTok. Check out our Bitcoin Fundamentals Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter. Learn how to better start, manage, and grow your business with the best business podcasts. SPONSORS Support our free podcast by supporting our sponsors: HardBlock Human Rights Foundation Simple Mining Netsuite Masterworks Shopify Vanta Fundrise References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm