POPULARITY
Categories
The first week of the Kouri Richins murder trial delivered the prosecution's key witness — and the defense's demolition of her credibility. Carmen Lauber claims she bought fentanyl for Kouri four times before Eric Richins died. But she was using meth. She got immunity from three jurisdictions. Her supplier now contradicts her. She admitted confusion under cross-examination. The jury has to decide whether any of that matters. Robin Dreeke explains how to read what's actually true.Dreeke spent 21 years with the FBI, including leading the Counterintelligence Behavioral Analysis Program. Detecting deception and assessing credibility in high-stakes environments was his job. He understands what behavioral indicators reveal whether a witness with credibility problems is still reliable at the core — or constructing a narrative for self-interest.The supplier reversal is central. Robert Crozier originally told detectives he sold fentanyl to Lauber. On the stand Friday, he said it was oxycodone and that he was "detoxing and out of it" during his original statement. When a witness changes their story years later under oath, Dreeke explains what determines which version is more likely true.Then there's Kouri herself. She's sat through five days of testimony describing how she allegedly murdered her husband. She's maintained composure throughout. Some read that as guilt. Others read it as the numbness of someone falsely accused. Dreeke identifies the specific micro-behaviors that would distinguish genuine shock from a performance that's been rehearsed for nearly four years.Join Our SubStack For AD-FREE ADVANCE EPISODES & EXTRAS!: https://hiddenkillers.substack.com/Want to comment and watch this podcast as a video? Check out our YouTube Channel. https://www.youtube.com/channel/UC8-vxmbhTxxG10sO1izODJg?sub_confirmation=1Instagram https://www.instagram.com/hiddenkillerspod/Facebook https://www.facebook.com/hiddenkillerspod/Tik-Tok https://www.tiktok.com/@hiddenkillerspodX Twitter https://x.com/TrueCrimePodThis publication contains commentary and opinion based on publicly available information. All individuals are presumed innocent until proven guilty in a court of law. Nothing published here should be taken as a statement of fact, health or legal advice.#KouriRichins #EricRichins #RobinDreeke #FBI #HiddenKillersLive #CarmenLauber #MurderTrial #DeceptionDetection #TrueCrime #Utah
Jeff and Jim sit down with David Llorens, principal at RSM, to break down the RSM 2026 Attack Vectors Report. Drawing from real-world offensive security engagements, David explains why identity continues to be the primary attack surface, how AI chatbots are creating new vulnerabilities through prompt injection, and what separates organizations that get breached from those that don't. The conversation covers MFA gaps, the explosion of non-human identities, why PAM is the top investment priority for 2026, and how CISOs can align security spending with business objectives. Plus, the episode wraps up with soccer stories and some quality trash talk.Connect with David: https://www.linkedin.com/in/david-llorens-009a3310/Review RSM's 2026 Attack Vectors Report: https://rsmus.com/insights/services/risk-fraud-cybersecurity/rsm-attack-vector-report.htmlConnect with us on LinkedIn:Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/Visit the show on the web at http://idacpodcast.comTIMESTAMPS0:00 - Intro and Jim's big personal news4:51 - Main topic intro: RSM 2026 Attack Vectors Report5:55 - David's origin story and how he got into cybersecurity9:53 - What a principal is at RSM and David's current role11:16 - What the Attack Vectors Report is and how it is created14:40 - Why identity security is a dominant theme in this year's report17:19 - What separates organizations that get breached from those that don't18:18 - MFA as the first line of defense18:45 - Privileged access management as a growing priority19:40 - Detecting lateral movement through identity anomalies21:00 - Credential rotation as an advanced defensive technique22:26 - Non-human identities and service account risks24:37 - Middle market challenges and budget constraints25:17 - Is it the size of the budget or how you spend it?28:29 - Using internal audit and cross-department collaboration for security wins30:15 - Cybersecurity as a business enabler, not a deterrent32:45 - Non-human identities and agentic AI creating new attack surfaces35:51 - Prompt injection attacks and AI chatbot vulnerabilities39:42 - Actionable recommendations for practitioners42:41 - MFA implementation gaps and session hijacking45:02 - The case for FIDO2 and layered conditional access46:35 - Is identity security a board-level issue?49:47 - Three things CISOs should focus on through 202650:52 - PAM as the top investment priority51:28 - Removing unnecessary privileges from users56:11 - Redefining what privilege means in your organization57:43 - Social media accounts as privileged access58:42 - Credentials stored in SharePoint and OneDrive59:38 - Wrap up and where to find the report59:58 - Lighter topic: David's soccer background and playing semi-pro1:05:06 - Best trash talk stories1:07:03 - Jim's trash talk philosophy: scoreboard1:08:00 - Jeff's basketball trash talk and calling his shots1:10:00 - Final thoughts and sign offKEYWORDSIDAC, Identity at the Center, Jeff Steadman, Jim McDonald, David Llorens, RSM, attack vectors report, offensive security, penetration testing, identity security, MFA, multifactor authentication, privileged access management, PAM, non-human identities, service accounts, agentic AI, AI security, prompt injection, lateral movement, credential rotation, FIDO2, conditional access, session hijacking, middle market, CISO, board-level security, certificate-based authentication, active directory, configuration management, shadow AI
Parce que… c'est l'épisode 0x716! Shameless plug 31 mars au 2 avril 2026 - Forum INCYBER - Europe 2026 14 au 17 avril 2026 - Botconf 2026 20 au 22 avril 2026 - ITSec Code rabais de 15%: Seqcure15 28 et 29 avril 2026 - Cybereco Cyberconférence 2026 9 au 17 mai 2026 - NorthSec 2026 3 au 5 juin 2026 - SSTIC 2026 19 septembre 2026 - Bsides Montréal 1 au 3 décembre 2026 - Forum INCYBER - Canada 2026 24 et 25 février 2027 - SéQCure 2027 Notes IA Confrontation DoW et Anthropic Anthropic digs in heels in dispute with Pentagon, source says Anthropic to Pentagon: Robo-weapons could hurt US troops Anthropic CEO says it cannot ‘accede' to Pentagon's demands for AI use Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight Trump admin blacklists Anthropic; AI firm refuses Pentagon demands Our agreement with the Department of War Statement on the comments from Secretary of War Pete Hegseth Anthropic Folie d'utilisation du IA Kevin Beaumont: “The incredible thing about thi…” - Cyberplace Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. Kevin Beaumont: “Accenture are firing people wh…” - Cyberplace Le grand remplacement IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization Infosec community panics over Anthropic Claude Code Security Long Before Tech CEOs Turned To Layoffs To Cover AI Expenses, There Was WorldCom Microsoft execs worry AI will eat entry level coding jobs AI gets good at finding bugs, not as good at fixing them Rapid AI-driven development makes security unattainable Claude Code Security Shows Promise, Not Perfection OpenClaw Google Antigravity falls to Earth under compute burden Malicious OpenClaw Skills Used to Trick Users into Manual Password Entry for AMOS Infection A Meta AI security researcher said an OpenClaw agent ran amok on her inbox The OpenClaw Hype: Analysis of Chatter from Open-Source Deep and Dark Web Sandboxes Won't Save You From OpenClaw This AI Agent Is Designed to Not Go Rogue AWS says 600+ FortiGate firewalls hit in AI-augmented attack Why the EU's AI Act is about to become every enterprise's biggest compliance challenge Detecting and preventing distillation attacks Anthropic Is AI Good for Democracy? Identity-First AI Security: Why CISOs Must Add Intent to the Equation Microsoft adds Copilot data controls to all storage locations AI models suck slightly less at math than they did last year Canadian government demands safety changes from OpenAI WA drivers reeling after passengers caught out by AI-powered safety cameras Souveraineté ou tout ce que je peux faire sur mon terrain Sovereignty in a System Prompt - POP RDI; RET; Danish government agency to ditch Microsoft software in push for digital independence US orders diplomats to fight data sovereignty initiatives Privacy ou tout ce qui devrait rester à la maison Enough Is Enough Five security lessons from the FBI's Washington Post raid Banning children from VPNs and social media will erode adults' privacy EU lawmakers propose that youth under 16 be barred from social media without parental consent Instagram to start alerting parents when children search for terms relating to self-harm Red ou tout ce qui est brisé Ransomware gangs advancing Moscow's geopolitical aims, Romanian cyber chief warns Android mental health apps with 14.7M installs filled with security flaws Discord pushes back age verification debut to 2H'26 Ransomware payment rate drops to record low as attacks surge Blue ou tout ce qui améliore notre posture Identity Prioritization isn't a Backlog Problem - It's a Risk Math Problem Windows 11 KB5077241 update improves BitLocker, adds Sysmon tool The Case for Why Better Breach Transparency Matters Some Linux LTS Kernels Will Be Supported Even Longer, Announces Greg Kroah-Hartman Collaborateurs Nicolas-Loïc Fortin Crédits Montage par Intrasecure inc Locaux réels par Intrasecure inc
What is the real killer when it comes to heart disease? Can the right cardiac testing truly mean the difference between life and death? In today's episode, we are joined by Dr. John Osborne, a Harvard-trained, triple board-certified cardiologist and Co-Founder of ClearCardio, to break it all down… Dr. Osborne earned his B.S. with honors from Penn State University, his M.D. magna cum laude from Jefferson Medical College, and a Ph.D. in cardiovascular physiology from Thomas Jefferson University. His postdoctoral training at Harvard Medical School and Brigham and Women's Hospital helped shape his expertise in non-invasive cardiology. Board-certified across multiple disciplines, his work focuses on preventive cardiology, metabolic syndrome, and cardiovascular genetics. Recognized as the American Heart Association's Cardiac Care Provider of the Year and named a Top Doctor multiple times, Dr. Osborne has authored original research papers, book chapters, and delivered hundreds of international presentations. Through ClearCardio, he is advancing proactive cardiac care by integrating AI-powered imaging to detect plaque earlier, quantify risk more precisely, and empower patients before symptoms appear. In this episode, we dive into: What actually causes heart attacks and sudden cardiac death. The role of soft plaque vs calcified plaque in coronary artery disease. Why many heart attacks happen after a "normal" stress test. The limits of stents and why they do not necessarily extend longevity. To learn more about Dr. Osborne and his work with ClearCardio, connect with him on LinkedIn!
NASA's EMIT mission uses a spectrometer to detect dusts and minerals from space, and it now can detect plastics from land. Plus, the Mars rovers can move around the red planet and do science, without human help.
Show Highlights: Cheaper retention and untapped SOW in ag customer loyalty amid farm loss. [05:12] The $5.6B cost of 7.2% yearly churn with 72% non-returns in ag retail. [13:05] Detecting "relative" churn for faster growth in 2026. [15:42] Why GROWERS created AI for predictive churn analytics. [18:24] Pro tip: Drive customer loyalty with cross-segment sales. [19:15] Do young farmers prefer visible rewards over patronage? [20:51] GROWERS' powerful AI-enabled, white-label loyalty program. [26:59] Differentiation strategies for single-segment companies. [34:13] How ERP hygiene improves readiness for tech adoption. [42:06] Integrate sales with loyalty tech for 13% more sales. [46:53] GROWERS' farmer-first model vs. anticompetitive tactics. [54:41] Farmer, retailer, and manufacturer alignment for win-wins in GROWERS. [57:34] Contact Steven Valencsin on LinkedIn at https://www.linkedin.com/in/stevenvalencsin/. To explore GROWERS, visit their website: https://growers.ag/ If you are interested in connecting with Joe, go to LinkedIn: https://www.linkedin.com/in/joemosher/, or schedule a call at www.moshercg.com.
How can public health detect invisible threats before they become crises? In this episode, we explore two powerful approaches shaping the future of preparedness: wastewater surveillance and radiological emergency response. First, Allison Wheeler, Manager, Wastewater Surveillance Unit Colorado Department of Public Health and Environment shares how her team detected measles in wastewater before clinical cases appeared, helping local partners identify an outbreak early and act quickly. She explains how wastewater surveillance is evolving beyond COVID-19 to monitor emerging and re-emerging diseases, track antimicrobial resistance, and strengthen early warning systems across communities. Then, Dr. Ziad Kazzi, Professor of Emergency Medicine at Emory University and President of the American College of Medical Toxicology breaks down what a radiological incident really looks like, from accidental exposures to nuclear incidents, and why these events may be more manageable than many people assume. He discusses how mass gatherings, like global sporting events, prepare for rare but high-impact scenarios, the importance of detection and decontamination, and how health systems and emergency responders work together to protect both patients and communities.Subscribe | ASTHOMeeting Home PageMeeting Home Page
Big thank you to Infoblox for sponsoring this video. For more information on Infoblox have a look at their website: https://www.infoblox.com/ // Get Wireshark Certified // Check out the official training course
Good Morning BT with Bo Thompson and Beth Troutman | Friday, February 20th, 2026. 6:05 Beth’s Song of the Day 6:20 GMBT taken over by A.I. Pt 1 6:35 GMBT taken over by A.I. Pt 2 6:50 RAM Biz Update; Frozen Pizza Friday | American's favorite frozen pizza brands 7:05 2026 Winter Olympics Update with Bo and Beth 7:20 Guest: Congressman Mark Harris - Pres. Trump at Fort Bragg recap 7:35 Congressman Mark Harris - Potential conflict with Iran 7:50 Tell Me Something Good 8:05 Caller Mike from Monroe talks Ted Knight 8:20 Caller Eric talks Transformer trivia | Obama and Trump comment on possible alien sightings 8:35 Friday News Quiz with Jeff Atkinson 8:50 Detecting tone through text 9:05 Guest: John Hancock 9:20 John Hancock cont. - The passing of Rev. Jesse Jackson 9:35 Big Weekend with John Hancock 9:50 Show wrapSee omnystudio.com/listener for privacy information.
This episode is part of the Restoration Theology class. Would you agree that every translation of the Bible has some sort of bias in it? Even the most literal translations have a good deal of bias baked into them. What can we do? Well, you could learn Hebrew and Greek so you can read the Bible for yourself instead of depending on a translation. Ok, but if you don’t have the inclination, motivation, or time to do that, what can you do? This episode of Restoration Theology is going to take you step by step through an English-only process of detecting bias in translation. You’ll learn a little about the translation process as well as how to spot bias in translation. This is a necessary component in our quest to evaluate doctrines against the text of Scripture. Listen on Spotify Listen on Apple Podcasts —— Links —— Check out the other episodes of the Restoration Theology class Support Restitutio by donating here Join our Facebook group, follow on X @RestitutioSF or Instagram @Sean.P.Finnegan Leave a 90 second voice message via SpeakPipe with questions or comments and we may play it out on the air Who is Sean Finnegan? Read his bio here Get Finnegan’s book, Kingdom Journey to learn about God’s kingdom coming on earth as well as the story of how Christianity lost this pearl of great price. Get the transcript of this episode Intro music: Good Vibes by MBB Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) Free Download / Stream: Music promoted by Audio Library.
Dr. Yash Singh sits down with Dr. Sudhakar Venkatesh to explore his global journey in radiology and how innovations like MR elastography are transforming the noninvasive diagnosis and staging of liver disease. They discuss the future of quantitative, AI-enhanced imaging in detecting liver cancer and biliary diseases earlier—shifting radiology from passive observation to a central, collaborative force in precision medicine. Radiology: Imaging Cancer special collection for Pancreatic Adenocarcinoma, Pancreatic Neuroendocrine, and Hepatobiliary Cancers.
Podcast Transcript: Michael Wienecke 0:00 Hey, and thanks for listening to the Peskies Pest Control Podcast. I am Michael Wienecke, owner and operator of Peskies Pest Control, and I have Travis McGowin. How we doing Travis? Travis McGowin 0:12 I’m doing well, how are you doing? Michael Wienecke 0:14 Great man, waiting on the freezing storm to get here. Travis McGowin 0:19 You know, I all the projections early on were completely devastating, and now it’s like North Alabama. Sorry, but everybody else, you’re going to get wet. It’s going to be rainy. So we’ll see what happens this weekend. Michael Wienecke 0:34 Well, and then they’re talking about next week. Could be even worse. So we’ll see. Michael Wienecke 0:38 Yeah, good times. Michael Wienecke 0:42 Well, today we want to talk about something that hasn’t come up very often, and that’s reinfestation of bed bugs. So we had a customer, after what about eight months of doing a heat treatment, reinfested her home? Right? Travis McGowin 0:57 Exactly. So initially, the customer believed that they brought bed bugs into their home by purchasing a quilt from a thrift store, I believe is what it was. I think her daughter had gifted her a really nice, handmade quilt that somebody had donated, and she got it home, took it out of whatever packaging was there, and just immediately laid it across the bed. And that’s the only thing that she can think of that was the reason why she was dealing with bed bugs in the first place. And so, of course, we came in, we did an inspection. She had a fairly significant infestation at that time. It’s a two story house. The, you know, the lady lives by herself, so the upstairs really not even utilized. So we, we treated the first floor. And you know, for all intents and purposes, everything was was really good after treatment up until now. Michael Wienecke 1:57 So let’s talk about the initial so you said she brought a she bought, or she thought she bought a quilt, brought it in and it had one bedbug on it. How? How bad was the infestation when you inspected the first time? Travis McGowin 2:11 So I can’t confirm or deny how many bedbugs were possibly on that quilt when it was brought in. However, I can say that when we did the inspection, it was, it was fairly significant with bed bugs in cracks and crevices on the headboard and the frame of the bed, as well as on the box springs and the mattress. So, you know, it’s, it’s possible that she may have had them and not known it prior to the actual quilt itself being brought in. But you know, it’s hard to tell, especially if you, you know, haven’t paid attention to it, or hadn’t noticed it until it was too late. Michael Wienecke 2:48 Well, I mean, they’re, they’re designed to hide in the tightest crack and Travis. I mean, you’ve seen them, you know, at the gas station, between the little pump joints before. Travis McGowin 3:00 I mean, I have seen them at a gas station, inside of the little booth where the the cashier sits, you know, and rings people up for gas and for convenience store goods there inside a cracker Travis, where a lady that worked there had brought them in her purse and set her purse on the counter, and someone had complained about getting bitten by a bed bug there at the gas station, and lo and behold, there was one bed bug in a very little crevice in the countertop. So they do get around. Michael Wienecke 3:31 Well, and that’s what I kind of wanted to talk about, how hard they are to detect. I mean, you know, one or two bed bugs within not knowing that and then a month goes by, and then you start, you’re starting to multiply, get bit all that. I mean, it can turn into a pretty quick, or I would say, a slow infestation, but you’re just not realizing it while it’s happening. Travis McGowin 3:52 Right, and so, you know, bed bugs can range in size, from the eggs, which are really, really tiny, and then the multiple, you know, nymphal stages, where they grow and then they shed their skin or molt, and then they grow again all the way up into the adult stage. I mean, so they can be a varying range of sizes, I would say, anywhere from the size of a mustard seed all the way up to even maybe close to the size of, just to give people an idea, a watermelon seed. I mean, some of these female bed bugs, especially after feeding and being engorged, can be rather large. So you could see where transferring these bed bugs, you know, from one place to the next, if you came in somewhere and sat down at a restaurant where potentially someone had sat with them in their purse or on their clothing, and they fell off in the restaurant booth, and then you came in next and sat down. I mean, it might be very easy to not even notice that you had transferred these little insects in with you and then inadvertently taking them home. So it’s fairly common and easy to get a bed bug infestation. I mean, bed bugs aren’t selective on whose house they go to. They just know that they need a host, and if they can attach on to someone’s clothing or, you know, say, luggage in a hotel or Airbnb or something like that, then they’ll do it. It doesn’t matter if you live in $100,000 house or a million dollar home. They’re, you know, they don’t discriminate. Michael Wienecke 5:23 No, not at all. I mean, we’ve seen them in Mountain Brook, Hoover, Birmingham, Montgomery, Helena. Travis McGowin 5:32 Wetumpka, Prattville, Deatsville. I mean, they’re like I said, they they can be widespread. You can have the cleanest house on the block or the dirtiest house on the block. It really doesn’t matter. Now, you know, with this particular individual we came in, heat treated the first floor of the home, you know, so that included the master bedroom, the living room, the kitchen, the dining room area, all of that we actually, you know, cooked it really, really well, of course, up to the manufacturer’s recommendations for the system that we use. And then everything was perceived by the owner of the home to be fine for, you know, a very, very long time. And then what basically happened next is, you know, eight months later, we have this. She said that she had went to get a blanket and change her sheets out on the bed, and she noticed a bed bug at that time. She kind of was speculating whether the bed bugs could have been hiding in the sheets in the closet or something like that. But what I’d like for people listening to realize is that it’s not likely that that would have been, you know, the issue of reinfesting because they were hiding in the closet or something, eight months later. So, you know, of course, bed bugs, depending upon their size and how long they are in their development, bed bugs can last a decent amount of time without feeding on a human or having a blood meal. You know, for everywhere from the this sounds terrible to say, but the newborn stage and the younger nymphal stages all the way up to the adults. The younger nymphal stages aren’t going to last without a blood meal more than probably, like, two weeks, give or take. And as they grow in size, they’re they’re gonna last a little bit longer to where the adults, you know, may last. Let’s just say six weeks, eight weeks, something like that. But definitely not to the tune of seven or eight months before they feed again. I mean, most of that population would have have died off, even if they were in the closet. So, you know, we kind of ruled that out, you know. And it was frustrating because, of course, you know, we go to do the inspection, and she’s got another significant infestation of bed bugs. So, you know, it does lead us to believe that it was more of an issue of reinfestation, where, I mean, maybe they were in her chair at work, or maybe, you know, in the car. But you know, it’s anybody’s guess as to where they came back from. Michael Wienecke 8:15 Well, and that’s why we always recommend, you know, leave your purse, leave everything you can in the house that we’re heating because that’s going to give us the best chance to get rid of those bed bugs. So let’s talk a little bit about the heat treatment. You know, how it works, all that kind of stuff. Travis McGowin 8:28 Yeah so we use a propane fired heater, and that heater goes, you know, outside of the house. So, you know, we don’t ever bring propane tanks or the actual unit with the heating element inside, inside your home, but we set up where we have access to run duct work. So of course, we set the heater up connected to all of the propane tanks, and then we run big mylar duct work into the structure, whether it’s through a window or a door, and we circulate that heat into the structure, and then we run mylar duct work from a different point of the structure out back to the back barrel of that heater to recirculate that heat. It’s more efficient. It maintains heat at a better rate. We use less propane, and we heat faster that way. But basically, we run that heat in through the mylar duct works, and then we bring in large fans inside of each room that we’re treating, and we circulate that heat and think of it like an, you know, essentially creating an oven inside your home. Okay, so let’s just say it’s the holidays. It’s Thanksgiving, and you go to put a turkey in the oven. You know that heat is going to be circulating around and moving around inside that oven and cooking the, you know, that Turkey, or whatever it is you’re cooking, and it’s going to slowly absorb into the food that you’re cooking. And, you know, increase the temperature of that to, you know, whatever the set temperature or your desire to. Temperature is to cook at, and it’s very similar that heat is going to be absorbed by anything inside the room, the contents that could be couches, chairs, you know, the walls, the ceiling, the floor, anything in between. And after those items reach the appropriate temperature, that’s when, of course, we start our timer, and then we cook based upon what our equipment manufacturers recommendations are. And, you know, afterwards, by the time we pull out all of our equipment and leave, the bed bug infestation is gone. Michael Wienecke 10:33 Yep. And it takes a whole day. I mean, we’re there for almost a whole eight hours. Travis McGowin 10:40 Yeah, absolutely. And you know that’s, of course, size dependent upon the structure. If we’ve got a structure where we’re treating two floors upstairs and downstairs, or just a very massive layout in terms of, you know, the floor on the first floor, it can, it can take a significant amount of time. And then, of course, you know, what the homeowner needs to realize or remember, is that when you come back into that structure after we’re done there again, we turned your home into essentially an oven, it’s going to be relatively warm for a while as that oven cools off, no different than when you take the turkey out of the oven and turn the oven off at Thanksgiving, and that oven is going to be warm for quite a while before the heat dissipates and cools off and then reaches room temperature again. Michael Wienecke 11:27 And that’s what’s so great about the heat treatment, is that, you know, it’s just, it’s kind of like you said, warming up everything at one time, and then it’s slowly radiating heat into other things. So you’re getting an internal temperature to kill those bed bugs in every inch of that home. Travis McGowin 11:43 Well, and bed bugs are very good at hiding. I mean, they didn’t, they didn’t, you know, stick around for this many years because they were bad at hiding. So, you know, if you had, like, a metal bed frame that’s hollow, you know, hollow tubes that make the frame and that’s got gaps or cracks in it where the joints come in. You got to think that bed bugs can get down into those spaces. So that heat being absorbed into furniture and into the room itself is great because it’s going to get to those places where, say, a normal chemical treatment may not be able to reach, and it’s really the quickest, most efficient way to kill a bed bug population, from egg all the way to adult in, you know, just a few hours, as opposed to going for weeks at a time applying a liquid product, having to wait for those eggs to hatch. Because, of course, no chemical product can penetrate the egg of a bed bug or any insect for that matter. Until it hatches, those eggs are safe, usually, but heat is a whole different story. It cooks them before they ever hatch. Michael Wienecke 12:52 Nukes, the whole family. Travis McGowin 12:55 Yeah, that’s, that’s a good way to put it, you know. And it’s what’s also is amazing. It’s good that you’re heating all the surfaces and the contents in the room, because when the increase in temperature begins, and you and I, Michael, have seen it when we go on site to do these treatments, but when those increase in temperatures begin, those bed bugs that are, you know, able to move, that haven’t hatched, or that have hatched already, they start to move, looking for a place that is cooler for them to stay, so that they can survive. And as you know, when you’re heating all those contents and all those surfaces, they don’t have anywhere to go. It’s pretty wild sometimes, and sometimes you don’t even realize how significant an infestation was until you start to crank the heat up, and then they all start to move. Michael Wienecke 13:41 Out of the woodwork. Travis McGowin 13:42 Yep, I had a college, two college dorm rooms that I treated. And you know, I saw bed bugs when I did the inspection, but when I started to turn the heat up in that dorm room, or both those dorm rooms, it was mind blowing how many bedbugs were actually in this empty, vacant dorm room with, you know, two, two beds? I mean, it was just amazing. You would have never guessed there were that many, but they started coming out of the woodwork trying to find a cooler place to go, Michael Wienecke 14:12 Right. Well, and I’m glad you brought that up about chemical and heat, and that’s why we chose heat. Is because heat is just, it’s, faster, in my opinion, in our opinion, it’s more efficient. We’re not having to go back 2, 3, 4, or five times. We’re not having to worry about a reinfestate or, you know, one surviving and reinfesting the home, anything like that, Travis McGowin 14:35 Right. And you know, there’s only so much that us as a pest control provider can actually control in terms of reinfestation. So for example, if you’re going to work in a place that has a known bed bug infestation, and let’s just say you brought them back to your house, your house was treated, your house was cleared. Obviously there’s a huge potential there to reinfest and so you know, if you find yourself in that situation where it’s like, okay, I can’t live peacefully in my own home, in my own space with bed bugs, but I still have to work in a place where there’s a high potential to bring them home, then there’s really some precautionary things that you probably need to be doing when you come home from work, for example, immediately removing those clothes and laundering them every day. The less amount of personal items, such as a purse or bags or anything like that, trying not to take that stuff with you, you know, because things can crawl in there, and then, you know, hitch your ride home with you. There’s just, like I said, there’s just several different things that you might want to look at doing if there’s a high potential to bring them back with you to reinfest your home. Visit us on YouTube! Click Here! Visit us on Facebook! Click Here! Learn more about bed bug heat treatments! Click Here! The post Detecting and Defeating Bed Bug Reinfestations in Birmingham Alabama! appeared first on Peskies Pest Control.
There's been an evolution in understanding concussions and a Colorado researcher has teamed up with experts worldwide to offer an easy guide for coaches and parents to recognize and to know what to do when a young athlete gets a concussion. Then, a push for juvenile justice reform at the state capitol through the first-hand stories of adults who were incarcerated as children. Also, the unseasonably warm weather has meant more fatal traffic crashes; we talk with a woman working to help injured motorcyclists and their families. Plus, a Valentine's Day tradition that has volunteers waiting in years' long lines to help.
On the show tonight, we are joined by Seb and Marcus from Regton Metal Detectors to discuss the new XP ST stereo update.Sponsored by Metal Detecting NewsBecome a supporter of this podcast: https://www.spreaker.com/podcast/the-big-detecting-show--3690873/support.
Jess from Digstock in the US joins Adrian and Dave on BIG Detecting Show of 2026Sponsored by Metal Detecting NewsBecome a supporter of this podcast: https://www.spreaker.com/podcast/the-big-detecting-show--3690873/support.
Parce que… c'est l'épisode 0x705! Shameless plug 25 et 26 février 2026 - SéQCure 2026 31 mars au 2 avril 2026 - Forum INCYBER - Europe 2026 14 au 17 avril 2026 - Botconf 2026 28 et 29 avril 2026 - Cybereco Cyberconférence 2026 9 au 17 mai 2026 - NorthSec 2026 3 au 5 juin 2026 - SSTIC 2026 19 septembre 2026 - Bsides Montréal Notes IA Dumpster fire called OpenClaw OpenClaw (a.k.a. Moltbot) is everywhere all at once, and a disaster waiting to happen ‘Moltbook' social media site for AI agents had big security hole, cyber firm Wiz says MoltBot Skills exploited to distribute 400+ malware packages in days DIY AI bot farm OpenClaw is a security ‘dumpster fire' Detecting and Monitoring OpenClaw (clawdbot, moltbot) Clouds rush to deliver OpenClaw-as-a-service offerings A sane but extremely bull case on Clawdbot / OpenClaw OpenClaw: When AI Agents Get Full System Access – Revolution or Security Nightmare? It's easy to backdoor OpenClaw, and its skills leak API keys Using microvm.nix to sandbox Openclaw OpenClaw Partners with VirusTotal to Secure AI Agent Skill Marketplace 17% of 3rd-Party Add-Ons for OpenClaw Used in Crypto Theft and macOS Malware Grok French prosecutors raid X offices, summon Musk over Grok deepfakes Kevin Beaumont: “The UK's Information Commissio…” - Cyberplace Kevin Beaumont: “Reuters reports Grok is still …” - Cyberplace Kevin Beaumont: “Elon Musk Under Investigation …” - Cyberplace Spain, Greece weigh teen social media bans, drawing fury from Elon Musk Vos agents IA sécurisés en -10 sec. sur Mac You won: Microsoft is walking back Windows 11's AI overload C'est prouvé : Le vibe coding va tuer l'open source Anthropic keeps Claude ad-free AWS intruder pulled off AI-assisted cloud break-in in 8 mins n8n's latest critical flaws bypass December fix Microsoft sets Copilot agents loose on your OneDrive files How Industrial Robot Safety Was Written In Blood Anthropic's Claude Opus 4.6 uncovers 500 zero-day flaws in open-source code GitHub - Deso-PK/make-trust-irrelevant: Make trust irrelevant for agentic AI using kernel-enforced authority boundaries. Malicious VS Code AI Extensions Harvesting Code from 1.5M Devs Red Notepad++ Notepad++ Hack Detailed Along With the IoCs and Custom Malware Used Notepad++ Users, You May Have Been Hacked by China Energy infrastructure cyberattacks are suddenly in fashion EDR killer tool uses signed kernel driver from forensic software Microsoft releases urgent Office patch. Russian-state hackers pounce. Attackers Using DNS TXT Records in ClickFix Script to Execute Powershell Commands nmapUnleashed Makes Nmap Scanning More Comfortable and Effective Google Looker Bugs Allow Cross-Tenant RCE, Data Exfil Blue When Cloud Outages Ripple Across the Internet Microsoft rolls out native Sysmon monitoring in Windows 11 Ukraine tightens controls on Starlink terminals to counter Russian drones Satya Nadella decides Microsoft needs a quality czar EDR, Email, and SASE Miss This Entire Class of Browser Attacks Privacy GDPR is a failure California city turns off Flock cameras after company shared data without authorization Vos données sont déjà en vente… et vous ne vous en rendez même pas compte Lockdown Mode - La fonction d'Apple qui a mis le FBI en échec We had sex in a Chinese hotel, then found we had been broadcast to thousands Souveraineté Europe shrugs off tariffs, plots to end tech reliance on US Russian spy satellites have intercepted EU communications satellites Munich makes digital sovereignty measurable with its own score Commission trials European open source communications software Divers et insolites Bitcoin Why This Computer Scientist Says All Cryptocurrency Should “Die in a Fire” Bitcoin gets a zero price target in wake of Burry warning (BTC-USD:Cryptocurrency) Bitcoin de la “marde” ou de l'or en barre !! :) (Franck Desert) Flock CEO calls Deflock a “terrorist organization” Germany warns of Signal account hijacking targeting senior figures BrianKrebs: “Must-read: How ‘Pink Slime' Pu…” - Infosec Exchange We moved fast and broke things. It's time for a change. Collaborateurs Nicolas-Loïc Fortin Crédits Montage par Intrasecure inc Locaux réels par Intrasecure inc
Join Adrian, Donna and Dave in an ode to the recent passing of Ancient Astronaut hypothesis author Erich Von DanikenSponsored by Metal Detecting NewsBecome a supporter of this podcast: https://www.spreaker.com/podcast/the-big-detecting-show--3690873/support.
New Season of PreCure, new review! With Detectives and Time Travel and Phantom Thieves, how does this episode hold up? Are we excited to watch more or does it fall short of expectations? How many Danganronpa and Detective Raincode references will we make if we continue to review this show?? Only one way to find out! Timestamps4:11 Succinct Summary 6:28 Main Thoughts 12:57 Predictions
In this episode of the Dementia Researcher podcast, host Adam Smith chats with with Professor Paul Freemont and researcher Tom Adam from the UK Dementia Research Institute at Imperial College London to discuss the critical issue of urinary tract infections (UTIs) in individuals living with dementia. The conversation highlights the complexities of diagnosing UTIs in people living with dementia, where communication barriers and atypical presentations often lead to misdiagnosis and unnecessary hospitalisations. The guests emphasise the urgent need for improved detection methods, as UTIs can exacerbate cognitive decline and lead to severe health complications. They talk about their work to develop and introduce an innovative novel point-of-care diagnostic device designed specifically for dementia patients, which aims to facilitate early detection of UTIs in a home and care home setting, thereby reducing the reliance on traditional symptom reporting and hospital visits. Key takeaways:
Join Donner and David this week with Archaeologist, Geo Physer and Detectorist, James Barnes. Plus the usual jovial fun and metal detecting chat.Sponsored by Metal Detecting NewsBecome a supporter of this podcast: https://www.spreaker.com/podcast/the-big-detecting-show--3690873/support.
From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.We discuss:* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)* Why post-training is the first big wedge: “surgical edits” for unintended behaviors likereward hacking, sycophancy, noise learned during customization plus the dream of targeted unlearning and bias removal without wrecking capabilities* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge up to and including early biomarker discovery work with major partners* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training—Goodfire AI* Website: https://goodfire.ai* LinkedIn: https://www.linkedin.com/company/goodfire-ai/* X: https://x.com/GoodfireAIMyra Deng* Website: https://myradeng.com/* LinkedIn: https://www.linkedin.com/in/myra-deng/* X: https://x.com/myra_dengMark Bissell* LinkedIn: https://www.linkedin.com/in/mark-bissell/* X: https://x.com/MarkMBissellFull Video EpisodeTimestamps00:00:00 Introduction00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire00:00:29 What is Goodfire? Mission and Focus on Interpretability00:01:01 Goodfire's Practical Approach to Interpretability00:01:37 Goodfire's Series B Fundraise Announcement00:02:04 Backgrounds of Mark and Myra from Goodfire00:02:51 Team Structure and Roles at Goodfire00:05:13 What is Interpretability? Definitions and Techniques00:05:30 Understanding Errors00:07:29 Post-training vs. Pre-training Interpretability Applications00:08:51 Using Interpretability to Remove Unwanted Behaviors00:10:09 Grokking, Double Descent, and Generalization in Models00:10:15 404 Not Found Explained00:12:06 Subliminal Learning and Hidden Biases in Models00:14:07 How Goodfire Chooses Research Directions and Projects00:15:00 Troubleshooting Errors00:16:04 Limitations of SAEs and Probes in Interpretability00:18:14 Rakuten Case Study: Production Deployment of Interpretability00:20:45 Conclusion00:21:12 Efficiency Benefits of Interpretability Techniques00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model00:25:15 How Steering Features are Identified and Labeled00:26:51 Detecting and Mitigating Hallucinations Using Interpretability00:31:20 Equivalence of Activation Steering and Prompting00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques00:36:04 Model Design and the Future of Intentional AI Development00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems00:40:51 Industry Applications and the Rise of Mechinterp in Practice00:41:39 Interpretability for Code Models and Real-World Usage00:43:07 Making Steering Useful for More Than Stylistic Edits00:46:17 Applying Interpretability to Healthcare and Scientific Discovery00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare00:52:03 Call for Design Partners Across Domains00:54:18 Interest in World Models and Visual Interpretability00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability01:00:14 Interpretability, Safety, and Alignment Perspectives01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at GoodfireTranscriptShawn Wang [00:00:05]: So welcome to the Latent Space pod. We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthopic was like still putting out like toy models or superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because the Goodfire has some interesting like health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Mara, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things. And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah.Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great. I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where good fire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-talk poking at models as opposed to. To actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked a good fire, they wouldn't have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. So, you know, one of the things that we've been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Quen or, um, R1 and they have sort of like this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model's internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can make interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you're doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.Vibhu Sapra [00:11:30]: I mean, also tendentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of. Okay. If we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle. Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting Z. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like we've always at the very beginning of the company thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really not falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think weren't an SAE based approach actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get it another chance, like what is the overall, like what is Rakuten's usage or production usage? Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with Interp is a lot of these methods are very efficient, right? So where you're just looking at a model's internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering and action. Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it's, uh, it takes a full, like each 100 node. I think it's like, you can. You can run it on eight GPUs, eight 100. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of our, uh, of the SG line code base that we've been working on. So I'm going to tell it, Hey, this SG line code base is slow. I think there's a bug. Can you try to figure it out? There's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize in real time. Some steering. Let's see here.Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally it might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing him do this code base is massive for real. So we're going to start. We're going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's um, it's purely sort of it's it's demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that uses, but yeah, as we're seeing it come in. Pretty good. Outputs.Shawn Wang [00:24:43]: Scheduler code is actually wild.Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.Vibhu Sapra [00:24:53]: What's the process of training in SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don't know while this is spewing up, but how, how do we find feature 43, two Oh five. Yeah.Mark Bissell [00:25:15]: So in this case, um, we, our platform that we've been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I'd say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top KSAEs, batch top KSAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your d input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I've run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that's the, that's the time of year to be like, Oh, I'm in this, I'm in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts andVibhu Sapra [00:26:34]: And I've got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that's very hard to detect. And it's like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that hallucinations is harder. But we've seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah. So there's no sort of problems with like feature splitting and feature absorption. And then there's the off target effects, right? Ideally, you would want to be very precise where if you reduce the hallucination feature, suddenly maybe your model can't write. Creatively anymore. And maybe you don't like that, but you want to still stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done, any any other things that you want to highlight or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of inter being applied to models quite at this scale. You know, Anthropic certainly has some some. Research and yeah, other other teams as well. But it's it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real time steering of a trillion parameter model would have sounded.Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's it's an interesting one TBD of what the actual like production use case would be on that, like the real time editing. It's like that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you're you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. Like people haven't done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter? Like, is it like parameter? We just kind of leave it as a default. We don't use it. So I don't know. Maybe someone here wants to put out a guide on like how to use this with prompting when to do what?Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation. I think you would love from Act Deep on our team, who is an amazing researcher, just can't say enough amazing things about Act Deep. But he actually has a paper that as well as some others from the team and elsewhere that go into the essentially equivalence of activation steering and in context learning and how those are from a he thinks of everything in a cognitive neuroscience Bayesian framework, but basically how you can precisely show how. Prompting in context, learning and steering exhibit similar behaviors and even like get quantitative about the like magnitude of steering you would need to do to induce a certain amount of behavior similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so like formally equivalent actually in the in the limit. Right.Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks there. I don't know. Have you seen the stuff where you can do like many shot jailbreaking? You like flood the context with examples of the behavior. And the topic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah, what's in this in context learning and activation steering equivalence paper is you can like predict the number. Number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of like equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back rationalize that this makes sense because, you know, what context is, is basically just, you know, it updates the KV cache kind of and like and then every next token inference is still like, you know, the sheer sum of everything all the way. It's plus all the context. It's up to date. And you could, I guess, theoretically steer that with you probably replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like like you did. So it's like not exactly equivalent.Mark Bissell [00:32:33]: Right, right. There's sort of you need to get precise about, yeah, like how you sort of define steering and like what how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature. Yeah. The title is Belief Dynamics Reveal the Dual Nature of Incompetence. And it's an exhibition of the practical context learning and activation steering. So Eric Bigelow, Dan Urgraft on the who are doing fellowships at Goodfire, Ekt Deep's the final author there.Myra Deng [00:32:59]: I think actually to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like imagine if you could adapt your model to be, you know, an expert legal reasoner. Like in almost real time, like very quickly. efficiently using human feedback or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about like what's the next interface for model customization and adaptation is a really interesting problem for us. Like we have heard a lot of people actually interested in fine-tuning an RL for open weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's like something we'reShawn Wang [00:34:06]: looking into. Yeah. I never thought so. Tinker from Thinking Machines famously uses rank one LoRa. Is that basically the same as steering? Like, you know, what's the comparison there?Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it's maybe like, are you modifying the pipes or are you modifying the water flowing through the pipes to get what you're after? Yeah. Just maybe one way.Mark Bissell [00:34:44]: I like that analogy. That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're, that we're very focused on. And just the fact that like, I hope that we look back at how we're currently training models and post-training models and just think what a primitive way of doing that right now. Like there's no intentionalityShawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So, so Dan from Goodfire likes to use this analogy of, you know, he has a couple of young kids and he talks about like, what if I could only teach my kids how to be good people by giving them cookies or like, you know, giving them a slap on the wrist if they do something wrong, like not telling them why it was wrong or like what they should have done differently or something like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, and, you know, it's sample inefficient. There's, you know, what do they say? It's like slurping feedback. It's like, slurping supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that are, uh, internalized and, and, you know, steering is an inference time way of sort of getting that idea. But ideally you're moving to a world whereVibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question, was you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Like, does this, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can, you can watch that. There wasn't too much of a connect there, but it's still something, you know, it's something they want toMark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like there are certainly post-hocVibhu Sapra [00:36:39]: use cases where it doesn't need to touch that. I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? Like I would say, if you're interested in getting into research, MechInterp is one of the most approachable fields, right? A lot of this train an essay, train a probe, this stuff, like the budget for this one, there's already a lot done. There's a lot of open source work. You guys have done some too. Um, you know,Shawn Wang [00:37:04]: There's like notebooks from the Gemini team for Neil Nanda or like, this is how you do it. Just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're like, not even technical with any of this, you can still make like progress. There, you can look at different activations, but, uh, if you do want to get into training, you know, training this stuff, correct me if I'm wrong is like in the thousands of dollars, not even like, it's not that high scale. And then same with like, you know, applying it, doing it for post-training or all this stuff is fairly cheap in scale of, okay. I want to get into like model training. I don't have compute for like, you know, pre-training stuff. So it's, it's a very nice field to get into. And also there's a lot of like open questions, right? Um, some of them have to go with, okay, I want a product. I want to solve this. Like there's also just a lot of open-ended stuff that people could work on. That's interesting. Right. I don't know if you guys have any calls for like, what's open questions, what's open work that you either open collaboration with, or like, you'd just like to see solved or just, you know, for people listening that want to get into McInturk because people always talk about it. What are, what are the things they should check out? Start, of course, you know, join you guys as well. I'm sure you're hiring.Myra Deng [00:38:09]: There's a paper, I think from, was it Lee, uh, Sharky? It's open problems and, uh, it's, it's a bit of interpretability, which I recommend everyone who's interested in the field. Read. I'm just like a really comprehensive overview of what are the things that experts in the field think are the most important problems to be solved. I also think to your point, it's been really, really inspiring to see, I think a lot of young people getting interested in interpretability, actually not just young people also like scientists to have been, you know, experts in physics for many years and in biology or things like this, um, transitioning into interp, because the barrier of, of what's now interp. So it's really cool to see a number to entry is, you know, in some ways low and there's a lot of information out there and ways to get started. There's this anecdote of like professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how, I guess, like exciting the field is, how fast it's moving, how quick it is to get started and things like that.Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open source McInturk Slack channel. There are people are always posting questions and just folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open paper, open problems paper is a really good one.Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...Vibhu Sapra [00:39:40]: Normally summer internship style.Myra Deng [00:39:42]: Yeah, but they've been doing it year round now. And actually a lot of our full-time staff have come through that program or gone through that program. And it's great for anyone who is transitioning into interpretability. There's a couple other fellows programs. We do one as well as Anthropic. And so those are great places to get started if anyone is interested.Mark Bissell [00:40:03]: Also, I think been seen as a research field for a very long time. But I think engineering... I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere, as it does scale up.Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? And in the London office and I'm adding our first ever McInturk track at AI Europe because I see this industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry McInturk conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet. It's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.Mark Bissell [00:40:51]: For sure. And I think the field understands this, right? So at ICML, I think the title of the McInturk workshop this year was actionable interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, like, just, you know, being in Europe, you see the Interp room. One, like old school conferences, like, I think they had a very tiny room till they got lucky and they got it doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered the paper last week. It's like two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah. Yeah. Yeah.Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet. It's just Interp for code. I think it's like an abnormally important field. We haven't mentioned this yet. The conspiracy theory last two years ago was when the first SAE work came out of Anthropic was they would do like, oh, we just used SAEs to turn the bad code vector down and then turn up the good code. And I think like, isn't that the dream? Like, you know, like, but basically, I guess maybe, why is it funny? Like, it's... If it was realistic, it would not be funny. It would be like, no, actually, we should do this. But it's funny because we know there's like, we feel there's some limitations to what steering can do. And I think a lot of the public image of steering is like the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To like be a legal reasoner seems like a huge stretch. Yeah. And I don't know if that will get there this way. Yeah.Myra Deng [00:42:36]: I think, um, I will say we are announcing. Something very soon that I will not speak too much about. Um, but I think, yeah, this is like what we've run into again and again is like, we, we don't want to be in the world where steering is only useful for like stylistic things. That's definitely not, not what we're aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning, um, are much more sophisticated and require breakthroughs in, in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. I think scale allows you to learn a lot of information and, and reduce noise across, you know, large amounts of data. But I also think we think that there's ways to do things much more effectively, um, even, even at scale. So like actually learning exactly what you want from the data and not learning things that you do that you don't want exhibited in the data. So we're not like anti-scale, but we are also realizing that scale is not going to get us anywhere. It's not going to get us to the type of AI development that we want to be at in, in the future as these models get more powerful and get deployed in all these sorts of like mission critical contexts. Current life cycle of training and deploying and evaluations is, is to us like deeply broken and has opportunities to, to improve. So, um, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that that's a use basically, or maybe just like a proof point that these concepts do exist. Like if you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse grained sort of peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.Myra Deng [00:44:30]: There were like bad code features. I've got it pulled up.Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.Shawn Wang [00:44:35]: This is like, this is exactly.Vibhu Sapra [00:44:38]: There's like specifically a code error feature that activates and they show, you know, it's not, it's not typo detection. It's like, it's, it's typos in code. It's not typical typos. And, you know, you can, you can see it clearly activates where there's something wrong in code. And they have like malicious code, code error. They have a whole bunch of sub, you know, sub broken down little grain features. Yeah.Shawn Wang [00:45:02]: Yeah. So, so the, the rough intuition for me, the, why I talked about post-training was that, well, you just, you know, have a few different rollouts with all these things turned off and on and whatever. And then, you know, you can, that's, that's synthetic data you can kind of post-train on. Yeah.Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is just saying, you know, they do the real hard work.Myra Deng [00:45:19]: I mean, you guys, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in, in our Lama models as well. I remember there was like.Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? Like, yeah, you guys opened yours. DeepMind has opened a lot of essays on Gemma. Even Anthropic has opened a lot of this. There's, there's a lot of resources that, you know, we can probably share of people that want to get involved.Shawn Wang [00:45:41]: Yeah. And special shout out to like Neuronpedia as well. Yes. Like, yeah, amazing piece of work to visualize those things.Myra Deng [00:45:49]: Yeah, exactly.Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit on, onto the healthcare side, because I think that's a big use case for you guys. We haven't really talked about it yet. This is a bit of a crossover for me because we are, we are, we do have a separate science pod that we're starting up for AI, for AI for science, just because like, it's such a huge investment category and also I'm like less qualified to do it, but we actually have bio PhDs to cover that, which is great, but I need to just kind of recover, recap your work, maybe on the evil two stuff, but then, and then building forward.Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation, I think another kind of interesting just lens on interpretability in general is a lot of the techniques that were described. are ways to solve the AI human interface problem. And it's sort of like bidirectional communication is the goal there. So what we've been talking about with intentional design of models and, you know, steering, but also more advanced techniques is having humans impart our desires and control into models and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics, data, medical imaging, things like that. But down the line, you know, superintelligence of other forms as well. What knowledge can the AIs teach us as sort of that, that the other direction in that? And so some of our life science work to date has been getting at exactly that question, which is, well, some of it does look like debugging these various life sciences models, understanding if they're actually performing well, on tasks, or if they're picking up on spurious correlations, for instance, genomics models, you would like to know whether they are sort of focusing on the biologically relevant things that you care about, or if it's using some simpler correlate, like the ancestry of the person that it's looking at. But then also in the instances where they are superhuman, and maybe they are understanding elements of the human genome that we don't have names for or specific, you know, yeah, discoveries that they've made that that we don't know about, that's, that's a big goal. And so we're already seeing that, right, we are partnered with organizations like Mayo Clinic, leading research health system in the United States, our Institute, as well as a startup called Prima Menta, which focuses on neurodegenerative disease. And in our partnership with them, we've used foundation models, they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg. But it's, that's like a flavor of some of the things that we're working on.Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chad Zuckerberg pod last year as well. And like, there's a plethora of these models coming out, because there's so much potential and research. And it's like, very interesting how it's basically the same as language models, but just with a different underlying data set. But it's like, it's the same exact techniques. Like, there's no change, basically.Mark Bissell [00:48:59]: Yeah. Well, and even in like other domains, right? Like, you know, robotics, I know, like a lot of the companies just use Gemma as like the like backbone, and then they like make it into a VLA that like takes these actions. It's, it's, it's transformers all the way down. So yeah.Vibhu Sapra [00:49:15]: Like we have Med Gemma now, right? Like this week, even there was Med Gemma 1.5. And they're training it on this stuff, like 3d scans, medical domain knowledge, and all that stuff, too. So there's a push from both sides. But I think the thing that, you know, one of the things about McInturpp is like, you're a little bit more cautious in some domains, right? So healthcare, mainly being one, like guardrails, understanding, you know, we're more risk adverse to something going wrong there. So even just from a basic understanding, like, if we're trusting these systems to make claims, we want to know why and what's going on.Myra Deng [00:49:51]: Yeah, I think there's totally a kind of like deployment bottleneck to actually using. foundation models for real patient usage or things like that. Like, say you're using a model for rare disease prediction, you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think like, being able to extract scientific information that no human knows to accelerate drug discovery and disease treatment and things like that actually is a really, really big unlock for science, like scientific discovery. And you've seen a lot of startups, like say that they're going to accelerate scientific discovery. And I feel like we actually are doing that through our interp techniques. And kind of like, almost by accident, like, I think we got reached out to very, very early on from these healthcare institutions. And none of us had healthcare.Shawn Wang [00:50:49]: How did they even hear of you? A podcast.Myra Deng [00:50:51]: Oh, okay. Yeah, podcast.Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.Myra Deng [00:50:55]: Everyone can call us.Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were like, really early that time, like three months old, and it was a few of us. And we were like, oh, my God, we've never used these models. Let's figure it out. But it's also like, great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about.Shawn Wang [00:51:21]: Interp is a machine learning technique, machine learning skills everywhere, right? Yeah. And it's obviously, it's just like a general insight. Yeah. Probably to finance too, I think, which would be fun for our history. I don't know if you have anything to say there.Mark Bissell [00:51:34]: Yeah, well, just across the science. Like, we've also done work on material science. Yeah, it really runs the gamut.Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those that should reach out, like, you're obviously experts in this, but like, is there a call out for people that you're looking to partner with, design partners, people to use your stuff outside of just, you know, the general developer that wants to. Plug and play steering stuff, like on the research side more so, like, are there ideal design partners, customers, stuff like that?Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life sciences, and then I'm curious to hear from you on the life sciences side. But we're looking for design partners across many domains, language, anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then also interested in the frontier of modeling. There's a lot of models that work in, like, pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural language interface to interact with, I think we think that Interp can really help and are looking for a few partners in that space.Shawn Wang [00:52:43]: Just because you mentioned the keyword
Join the team with Alan Tamblyn from the National Council of Metal Detecting who will be updating us on the work the NCMD have been doing this year. Also announcing the top winners of the FREE £10,000 members draw...!Sponsored by Metal Detecting NewsBecome a supporter of this podcast: https://www.spreaker.com/podcast/the-big-detecting-show--3690873/support.
By the time symptoms of disease are observable in a crop, potential yield may already be effected. But what if there was a way to be notified earlier about crop diseases?
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Detecting and Monitoring OpenClaw (clawdbot, moltbot) https://isc.sans.edu/diary.html/Detecting+and+Monitoring+OpenClaw+%28clawdbot%2C+moltbot%29/32678/#comment Synology telnetd Patch https://www.synology.com/en-us/releaseNote/DSM GlassWorm Loader Hits Open VSX via Developer Account Compromise https://socket.dev/blog/glassworm-loader-hits-open-vsx-via-suspected-developer-account-compromise
Recorded live at Pocket Gamer London, Greg sits down with Hill from Checkstep for a wide ranging conversation on trust and safety, AI powered content detection, parenting in a gaming household, and why moderation is no longer just about removing harm.Hill shares her journey from data analyst to GTM leader in trust and safety, how Checkstep is building on top of the rapidly evolving AI ecosystem rather than competing with it, and why “build vs buy” decisions are becoming existential for modern studios.They explore:• What trust and safety really means for multiplayer games and UGC platforms• Why content detection is replacing traditional moderation language• How large language models are changing speed to value for studios• Where humans still matter in AI driven workflows• Detecting grooming and harmful behavior without exposing moderators to trauma• Why keystroke detection and behavioral patterns are becoming new signals• The real ROI conversation studios want proof on• Promoting positive player behavior instead of only policing bad actors• Parenting in a gaming household and how Greg thinks about kid safe play• Why balance beats bans when raising young players• Continuous learning, newsletters, and staying sharp in fast moving industries• What success looks like for startups scaling with investors• Hill's 2026 goals and growing meaningful industry partnershipsThe conversation blends operator level insight with personal stories, from renovating bathrooms at night to Wordle streaks and Goat Simulator family sessions.If you care about LiveOps, community health, AI in CX, or building safer game ecosystems at scale, this episode is for you.
‘AI-assisted mammograms result in fewer aggressive and advanced breast cancer', according to a new study which used AI in 200,000 breast exams from various institutions in more than 10 countries. Joining Shane and Ciara was Suzanne Little, Professor in the School of Computing at Dublin City University.
For 50 years, the healthcare industry has been trying (and failing) to harness the power of artificial intelligence. It may finally be ready for prime time. What will this mean for human doctors — and the rest of us? (Part four of “The Freakonomics Radio Guide to Getting Better.”) SOURCES:Bob Wachter, professor, chair of the department of medicine at the University of California, San Francisco.Pierre Elias, cardiologist, assistant professor of biomedical informatics at Columbia University, medical director for artificial intelligence at NewYork-Presbyterian Hospital. RESOURCES:A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, by Bob Wachter (2026)."Epic Systems (MyChart)," by Acquired (2025)."Detecting structural heart disease from electrocardiograms using AI," by Pierre Elias and Timothy Poterucha (Nature, 2025)."What Are the Risks of Sharing Medical Records With ChatGPT?" by Maggie Astor (New York Times, 2025)."Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?" by Bob Wachter and Erik Brynjolfsson (JAMA, 2023).The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age, by Bob Wachter (2015). EXTRAS:"The Doctor Won't See You Now," by Freakonomics Radio (2025)."How to Stop Worrying and Love the Robot Apocalypse (Update)," by Freakonomics Radio (2024). Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This podcast is made possible by our listeners and viewers. If this show has brought you value, you can support it by becoming a member of The Way Forward, our platform designed to help you find the health and freedom community (people, practitioners, schools, farms, and more) near you. Your membership directly supports the podcast and the work we do: www.thewayfwrd.com/joinDid you know your health issues might depend on your biofield?In this episode, I sit down with Dr. Jason Yuan, a physician and practitioner working at the intersection of holistic health, energy healing, and consciousness research. The conversation centers on why chronic conditions often persist when health is treated as purely physical and what becomes possible when the human biofield is taken seriously.Health is approached here as an interaction between the body, emotional patterns, belief systems, and awareness itself. We examine how modalities like acupuncture, hands-off energy healing, and biofield-based practices point toward the same underlying principle: coherence matters.Placebo and nocebo effects expose a familiar pattern: real-world results consistently outpacing institutional validation. If you're interested in biofield science, mind body connection, and healing at the root, this episode is worth your time.You'll Learn:[00:00] Introduction[06:28] Pranic healing combined with acupuncture produces 90% success rates[14:19] Defining the biofield and how it acts as your body's blueprint[21:31] The latest biofield research and why belief matters in healing outcomes[36:48] Dr. Yuan's healing journey: removing energetic blocks, addressing limiting beliefs, and the role of character in consciousness evolution[40:04] Detecting energetic congestion, the solar plexus role in self-worth and chronic disease, and consciousness as the foundation[48:27] How emotional purging works, whether trauma is stored in beliefs or tissues, and the role of entities in healing[59:12] Dr. Yuan's spiritual awakening through third eye meditation[01:05:40] Daily self-reflection practices for maintaining energetic balance [01:13:44] Joe Dispenza's coherence healing research group[01:20:13] The power of consciousness to shape reality, social media's influence, and the Maharishi Effect[01:32:28] Manifestation, attachment, and the importance of giving what you want to receive[01:37:46] The role of intuition in pranic healing and how sensitivity develops with energetic cleanlinessRelated The Way Forward Episodes:Tuning the Zodiac & Balancing Through Sound featuring Eileen McKusick | YouTubeThought, Light & The Liquid Language of God with Veda Austin | YouTubeResources Mentioned:Biofield Definition: Consciousness & Healing Initiative | WebsiteA Breakthrough in Scientific Research: Meditation's Impact on Immunity by Dr. Joe Dispenza | WebsiteThe Secret of Light by Walter Russell | Book A Big Picture Theory of Everything by Tom Campbell | WebsiteFind more from Dr. Jason:Dr. Jason Yuan | InstagramConsciousness Cartographer | SubstackFind more from Alec:Alec Zeck | InstagramAlec Zeck | XThe Way Forward | InstagramThe Way Forward is Sponsored By:New Biology Clinic: Redefine Health from the Ground UpExperience tailored terrain-based health services with consults, livestreams, movement classes, and more. Visit www.NewBiologyClinic.com and use code THEWAYFORWARD (case sensitive) for $50 off activation. Members get the $150 fee waivedDesigned for deep focus and well-being. 100% blue light and flicker free. For $50 off your Daylight Computer, use discount code: TWF50
In an era where artificial intelligence (AI) and sensing technologies are rapidly evolving, the ability to detect human presence beyond traditional visual means has emerged as a groundbreaking frontier. Natalya Lopareva, CEO and founder of Algorized, delves into the innovative approaches being developed to sense human life in environments where conventional methods fall short. She explains the significance of these advancements, the underlying technologies, and the potential implications for various industries.The Need for Advanced Human DetectionThe initial motivation behind Algorized's research stemmed from a noble cause: aiding rescue missions by locating individuals trapped in buildings. Traditional methods relying on cameras and thermal imaging often fail in scenarios where visibility is compromised, such as collapsed structures or dark environments. This limitation raised a critical question: how can we discern human presence when visual cues are absent? The answer lies in understanding the physiological signals emitted by the human body, such as heartbeats and breathing patterns.As Natalya explains, the early research focused on detecting these vital signs, which are fundamental indicators of life. This foundational work laid the groundwork for broader applications beyond emergency response, revealing that every major industry has a pressing need for accurate human sensing capabilities. From automotive safety to industrial automation, the ability to detect human presence, regardless of visibility, can significantly enhance operational safety and efficiency.Technological Innovations in Human SensingAlgorized's journey from wall-penetrating technology to versatile human detection systems illustrates the transformative potential of AI in understanding human presence. The company has developed foundation models capable of sensing individuals in various contexts, whether in a car, an industrial setting, or even behind walls. This technology operates on the principle that the data derived from physiological signals can be interpreted similarly, regardless of the environment.For instance, in the automotive industry, manufacturers are increasingly concerned about the safety of passengers, particularly vulnerable individuals such as infants left unattended in vehicles. Algorized's technology can detect the presence of a child through non-visual means, providing an essential layer of safety that traditional camera systems may overlook. This capability not only enhances the safety of passengers but also addresses significant liability concerns for manufacturers.Bridging the Human-Machine GapOne of the most compelling aspects of Algorized's work is its focus on fostering collaboration between humans and machines. As Natalya articulates, the challenge is not merely about preventing machines from becoming dangerous; it is about creating a safe coexistence where machines can understand human intentions and respond appropriately. By equipping robots and automated systems with the ability to detect human presence and emotional states, we can establish a more harmonious interaction between humans and technology.This vision is particularly relevant in the context of physical AI, which often relies on visual data to interpret its surroundings. While cameras provide valuable information, they can be limited by factors such as lighting conditions or obstructions. In contrast, Algorized's approach enables machines to perceive human presence in a more nuanced and reliable manner, allowing them to adapt their behavior based on the real-time understanding of human activity.Implications for the FutureThe implications of detecting human presence beyond sight are profound. As industries continue to integrate AI and automation into their operations, the ability to sense human life accurately will play a crucial role in ensuring safety and efficiency. In healthcare, for example, this technology could revolutionize patient monitoring, allowing for real-time assessments of vital signs without intrusive methods. In manufacturing, it could enhance worker safety by enabling machines to recognize and respond to human presence dynamically.Moreover, as society grapples with the ethical considerations of AI and automation, technologies that prioritize human safety and well-being will be paramount. By focusing on true human-machine collaboration, Algorized is paving the way for a future where technology serves as an ally rather than a threat.ConclusionDetecting human presence beyond sight represents a significant leap forward in the capabilities of AI and sensing technologies. As demonstrated by Algorized's innovative approaches, the ability to discern vital signs and human activity in various environments holds immense potential for enhancing safety, efficiency, and collaboration across industries. By prioritizing human understanding in the development of AI, we can create a future where technology not only complements human life but actively contributes to its preservation and enhancement. As we continue to explore this frontier, the promise of a safer, more interconnected world becomes increasingly attainable.Interview by Scott Ertz of F5 Live: Refreshing Technology.Sponsored by: Get $5 to protect your credit card information online with Privacy. Amazon Prime gives you more than just free shipping. Get free music, TV shows, movies, videogames and more. Secure your connection and unlock a faster, safer internet by signing up for PureVPN today.
In an era where artificial intelligence (AI) and sensing technologies are rapidly evolving, the ability to detect human presence beyond traditional visual means has emerged as a groundbreaking frontier. Natalya Lopareva, CEO and founder of Algorized, delves into the innovative approaches being developed to sense human life in environments where conventional methods fall short. She explains the significance of these advancements, the underlying technologies, and the potential implications for various industries.The Need for Advanced Human DetectionThe initial motivation behind Algorized's research stemmed from a noble cause: aiding rescue missions by locating individuals trapped in buildings. Traditional methods relying on cameras and thermal imaging often fail in scenarios where visibility is compromised, such as collapsed structures or dark environments. This limitation raised a critical question: how can we discern human presence when visual cues are absent? The answer lies in understanding the physiological signals emitted by the human body, such as heartbeats and breathing patterns.As Natalya explains, the early research focused on detecting these vital signs, which are fundamental indicators of life. This foundational work laid the groundwork for broader applications beyond emergency response, revealing that every major industry has a pressing need for accurate human sensing capabilities. From automotive safety to industrial automation, the ability to detect human presence, regardless of visibility, can significantly enhance operational safety and efficiency.Technological Innovations in Human SensingAlgorized's journey from wall-penetrating technology to versatile human detection systems illustrates the transformative potential of AI in understanding human presence. The company has developed foundation models capable of sensing individuals in various contexts, whether in a car, an industrial setting, or even behind walls. This technology operates on the principle that the data derived from physiological signals can be interpreted similarly, regardless of the environment.For instance, in the automotive industry, manufacturers are increasingly concerned about the safety of passengers, particularly vulnerable individuals such as infants left unattended in vehicles. Algorized's technology can detect the presence of a child through non-visual means, providing an essential layer of safety that traditional camera systems may overlook. This capability not only enhances the safety of passengers but also addresses significant liability concerns for manufacturers.Bridging the Human-Machine GapOne of the most compelling aspects of Algorized's work is its focus on fostering collaboration between humans and machines. As Natalya articulates, the challenge is not merely about preventing machines from becoming dangerous; it is about creating a safe coexistence where machines can understand human intentions and respond appropriately. By equipping robots and automated systems with the ability to detect human presence and emotional states, we can establish a more harmonious interaction between humans and technology.This vision is particularly relevant in the context of physical AI, which often relies on visual data to interpret its surroundings. While cameras provide valuable information, they can be limited by factors such as lighting conditions or obstructions. In contrast, Algorized's approach enables machines to perceive human presence in a more nuanced and reliable manner, allowing them to adapt their behavior based on the real-time understanding of human activity.Implications for the FutureThe implications of detecting human presence beyond sight are profound. As industries continue to integrate AI and automation into their operations, the ability to sense human life accurately will play a crucial role in ensuring safety and efficiency. In healthcare, for example, this technology could revolutionize patient monitoring, allowing for real-time assessments of vital signs without intrusive methods. In manufacturing, it could enhance worker safety by enabling machines to recognize and respond to human presence dynamically.Moreover, as society grapples with the ethical considerations of AI and automation, technologies that prioritize human safety and well-being will be paramount. By focusing on true human-machine collaboration, Algorized is paving the way for a future where technology serves as an ally rather than a threat.ConclusionDetecting human presence beyond sight represents a significant leap forward in the capabilities of AI and sensing technologies. As demonstrated by Algorized's innovative approaches, the ability to discern vital signs and human activity in various environments holds immense potential for enhancing safety, efficiency, and collaboration across industries. By prioritizing human understanding in the development of AI, we can create a future where technology not only complements human life but actively contributes to its preservation and enhancement. As we continue to explore this frontier, the promise of a safer, more interconnected world becomes increasingly attainable.Interview by Scott Ertz of F5 Live: Refreshing Technology.Sponsored by: Get $5 to protect your credit card information online with Privacy. Amazon Prime gives you more than just free shipping. Get free music, TV shows, movies, videogames and more. Secure your connection and unlock a faster, safer internet by signing up for PureVPN today.
Unspoken Words: A Selective Mutism Podcast by Dr. Elisa Shipon-Blum
Episode 70 of the Unspoken Words podcast features a discussion between Dr. Elisa Shipon-Blum and Cara Tyrrell, M.Ed., discussing detecting Selective Mutism early and building the foundation for successful intervention through intentional language and collaborative support.In the episode, Dr. E and Cara explore why early detection matters and why many cases of Selective Mutism are missed in young children. They discuss the distinction between shyness and anxiety-based communication challenges, red flags to watch for, and how intentional language builds confidence without pressure. They also cover first steps toward professional evaluation and treatment planning.--Chapters: (3:59) Full Body Sensory – Understanding How Anxious Kids Experience the World(7:04) The Four Ages Framework – Why Your Smart Kid Still Can't Handle Social Situations(17:58) Beyond the Label – Why We're Assessing Instead of Understanding(24:44) Tell, Don't Test – How Your Words Shape Anxiety in the Moment(41:13) Connection Over Correction – Building the Foundation for Everything Else- Ask Dr. E a question of your own! Learn more about the host, Dr. Elisa Shipon-Blum Explore our SMart Center success stories! Get started at the SMart Center Listen to other Unspoken Words episodes here. For the best clips from every episode, follow the podcast on Instagram & YouTube Share our upcoming Selective Mutism In The School Virtual Conference on April, 10th, 2026, with your child or teen's school staff. 6.5 CEs/CEUs are available. Learn more about CommuniCamp, our 3+ day intensive group treatment and ALL DAY parent training & support program- For all podcast inquiries, please contact Dakota Hornak at dhornak@selectivemutismcenter.org This podcast was produced and published by New Edition Productions (neweditionconsulting.com)
Program notes:0:40 Two MMWR reports on wastewater to detect measles1:40 Subsequent detection after early identification2:40 Watch worldwide transition3:15 Weight regain after medication for weight management4:16 Cardiometabolic risk factors return in just over a year5:16 Willingness to use declined with knowledge of regain risk6:16 Prevention of obesity6:33 Chronic kidney disease and heart failure link7:35 Extracellular vesicles found8:35 Precise identification of a tangle pathway9:03 Physical activity types, varieties and mortality10:03 Higher variety conferred additional survival benefit11:03 Will you change your behavioral?12:03 Lower hypertension, BMI12:39 End
Ambient documentation is becoming normal in clinics. But the most interesting “voice” capability may not be transcription at all.In the latest episode of Faces of Digital Health, Henry O'Connell (Canary Speech) explains why voice biomarkers stalled for decades: the field analyzed words, not the neurological signal behind speech production.Canary's approach focuses on the “primary data layer”—how the central nervous system drives respiration, vocal cord vibration, and articulation in real conversational speech. A few details that stood out: ⏱️ ~45 seconds of conversation can be enough for assessment
This week in the security news: Supply chain attacks and XSS PS5 leaked keys Claude tips for security pros No Flipper Zeros allowed, or Raspberry PIs for that matter Kimwolf and your local network Linux is good now Removing unremovable apps without root Detecting lag catches infiltrators Defending your KVM Fixing some of the oldest code Deleting websites live on stage in costume It was a honeypot FCC is letting telecoms off easy Don't buy a Haribo power bank Ransomeware scum Fortinet vulns CISA warns about NVRs Patching MongoDB Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-908
Logan Harris is CEO, President, and Founder of Spotter Global—a company specializing in compact radar and drone detection technologies. Spotter Global imagines, designs, manufactures, and coordinates the software development of compact surveillance radars, Remote Drone ID, NetworkedIO command and control, and its Integrated Management Center. The company was originally founded to meet the needs of U.S. Special Forces, who required a very small, wide-area radar to protect small units conducting Village Stability Operations in Iraq and Afghanistan. From that need, the first Compact Surveillance Radar—the M600—was developed to protect warfighters operating in austere environments. In 2013, the attack on the Metcalf substation in California highlighted the need to detect threats far beyond the fence line. In response, Spotter introduced its first Compact Security Radar, the C40. Since then, the company has expanded its commercial off-the-shelf offerings to include 17 radar models covering areas from one acre to more than 380 acres, serving markets well beyond critical infrastructure—and far beyond North America. Logan is widely recognized as the inventor of the compact surveillance radar category. With deep expertise in RF engineering and digital signal processing, he launched SpotterRF in 2009 to help prevent harm to critical infrastructure and protect warfighters. Previously, Logan served as CTO at Wavetronix and as CTO and co-founder of ImSAR, the creator of NanoSAR. His engineering background also includes roles at IBM, TRW, Sensar Larson Davis, and Vantage. Logan holds both Bachelor's and Master's degrees in Electrical Engineering from Brigham Young University. Known for his innovation and leadership, he has positioned Spotter Global as a trusted radar provider across government and commercial sectors. In this episode of the Drone Radio Show, Logan talks about the growing reality of drone threats, how Spotter Global is using advanced detection and Remote ID technology to protect critical infrastructure and large public events, and what the future of airspace security looks like as agencies, regulations, and technologies continue to evolve.
This week in the security news: Supply chain attacks and XSS PS5 leaked keys Claude tips for security pros No Flipper Zeros allowed, or Raspberry PIs for that matter Kimwolf and your local network Linux is good now Removing unremovable apps without root Detecting lag catches infiltrators Defending your KVM Fixing some of the oldest code Deleting websites live on stage in costume It was a honeypot FCC is letting telecoms off easy Don't buy a Haribo power bank Ransomeware scum Fortinet vulns CISA warns about NVRs Patching MongoDB Show Notes: https://securityweekly.com/psw-908
This week in the security news: Supply chain attacks and XSS PS5 leaked keys Claude tips for security pros No Flipper Zeros allowed, or Raspberry PIs for that matter Kimwolf and your local network Linux is good now Removing unremovable apps without root Detecting lag catches infiltrators Defending your KVM Fixing some of the oldest code Deleting websites live on stage in costume It was a honeypot FCC is letting telecoms off easy Don't buy a Haribo power bank Ransomeware scum Fortinet vulns CISA warns about NVRs Patching MongoDB Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-908
Happy New Year! In this first episode of 2026, we spoke with Dr. Sarah Whipple, a Climate Adaptation Service Scientist and biologist with the Climate Adaptation Technical Services (CATS) initiative of the USGS National Climate Adaptation Science Center. Dr. Whipple, who has expertise in pollinator biology, inventory and monitoring, discussed the importance of pollinators and explained the impact of a shifting landscape and climate on species that are important for agriculture, food security and resilience. Listen to learn more about Sarah and her research!Relevant links: CASC Climate Adaptation Technical Services The buzz around biodiversity decline: Detecting pollinator shifts using a systematic reviewLeveraging virtual datasets to investigate the interplay of pollinators, protected areas, and SDG 15If you're enjoying this podcast, please consider rating us and/or leaving us a review on Apple Podcasts, Podcast Addict, or Podchaser. Thanks!Follow us on Twitter @RainShinePodNever miss an episode! Sign up to get an email alert whenever a new episode publishes (http://eepurl.com/hRuJ5H)Have a suggestion for a future episode? Please tell us!Come Rain or Shine affiliate links:DOI Southwest CASC: https://www.swcasc.arizona.edu/
This week in the security news: Supply chain attacks and XSS PS5 leaked keys Claude tips for security pros No Flipper Zeros allowed, or Raspberry PIs for that matter Kimwolf and your local network Linux is good now Removing unremovable apps without root Detecting lag catches infiltrators Defending your KVM Fixing some of the oldest code Deleting websites live on stage in costume It was a honeypot FCC is letting telecoms off easy Don't buy a Haribo power bank Ransomeware scum Fortinet vulns CISA warns about NVRs Patching MongoDB Show Notes: https://securityweekly.com/psw-908
Send us a textYour gut has tried to warn you before. That flicker when someone lingers by a bathroom door, shadows you to the elevator, or blocks your path with a smile and a prop—it's not paranoia, it's a baseline breaking. We dig into a practical method for spotting a cunning opponent early, whether it's a low-level hustler at the pump or a higher-stakes actor at a major event. The mechanics of deception don't change; only the stakes do.We start by stripping away the jargon and defining terms that keep you clear-headed. Opponent isn't “enemy”; it's anyone competing for advantage in a shared environment. From there, we map the four recurring tools that reveal intent: access, blending, manipulation, and timing. You'll learn how bad actors create closeness, hide in plain sight with props and roles, steer your attention, and pick their moment. Then we teach BASE—Baseline, Anomaly, Simplest explanation, Experiment—a concise loop you can run in real time to test what you think you're seeing without overreacting.You'll hear concrete examples from gas stations, hotels, tourist piers, and event lines, plus how to communicate what matters with short, useful language that prompts action. We walk through MLCOA versus MDCOA to balance likely and dangerous interpretations, then show low-calorie experiments—changing angle, reversing course, quick contact, second set of eyes—that force a reaction and buy time and distance. If you can catch a shoplifter's access play, you can disrupt a more serious plan using the same cues. This is situational awareness you can practice today: clean, repeatable, and calm under pressure.If this helped sharpen your instincts, follow the show, share it with a friend, and leave a quick review. For deeper dives and drills, check out our Patreon and keep the reps going—training changes behavior.Support the showWebsite: https://thehumanbehaviorpodcast.buzzsprout.com/shareFacebook: https://www.facebook.com/TheHumanBehaviorPodcastInstagram: https://www.instagram.com/thehumanbehaviorpodcast/ Patreon: https://www.patreon.com/ArcadiaCognerati More about Greg and Brian: https://arcadiacognerati.com/arcadia-cognerati-leadership-team/
CISO pressures are on the rise - board expectations, executive alignment, AI, and personal liability - and that's all on top of your normal security pressures. With all these pressures, CISO burnout is on the rise. How do we detect it and help prevent it? Easier said than done. In this Say Easy, Do Hard segment, we tackle the health and wellness of the CISO. In part 1, we discuss the increased pressures CISOs face. We all know them, but how are they impacting our daily lives, both at work and at home. In part 2, we discuss detection and prevention techniques to help avoid burnout, including: Detecting the signs of stress Acknowledging there is a problem Asking for help Techniques to deal with stress Industry and community support This is a serious problem in our industry and one we want to continue to focus on as we head into another stressful 2026. Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-428
CISO pressures are on the rise - board expectations, executive alignment, AI, and personal liability - and that's all on top of your normal security pressures. With all these pressures, CISO burnout is on the rise. How do we detect it and help prevent it? Easier said than done. In this Say Easy, Do Hard segment, we tackle the health and wellness of the CISO. In part 1, we discuss the increased pressures CISOs face. We all know them, but how are they impacting our daily lives, both at work and at home. In part 2, we discuss detection and prevention techniques to help avoid burnout, including: Detecting the signs of stress Acknowledging there is a problem Asking for help Techniques to deal with stress Industry and community support This is a serious problem in our industry and one we want to continue to focus on as we head into another stressful 2026. Show Notes: https://securityweekly.com/bsw-428
CISO pressures are on the rise - board expectations, executive alignment, AI, and personal liability - and that's all on top of your normal security pressures. With all these pressures, CISO burnout is on the rise. How do we detect it and help prevent it? Easier said than done. In this Say Easy, Do Hard segment, we tackle the health and wellness of the CISO. In part 1, we discuss the increased pressures CISOs face. We all know them, but how are they impacting our daily lives, both at work and at home. In part 2, we discuss detection and prevention techniques to help avoid burnout, including: Detecting the signs of stress Acknowledging there is a problem Asking for help Techniques to deal with stress Industry and community support This is a serious problem in our industry and one we want to continue to focus on as we head into another stressful 2026. Visit https://www.securityweekly.com/bsw for all the latest episodes! Show Notes: https://securityweekly.com/bsw-428
Send us a textLung cancer is often discovered too late, when treatments are expensive and survival rates are low. But what if routine chest x-rays could flag cancer early…long before symptoms appear? AI is transforming everyday imaging into a powerful early detection tool, reshaping screening economics and saving lives around the world. Prashant Warier, CEO and Founder of Qure.ai, joins CareTalk to discuss how AI enables earlier diagnosis, why chest x-rays are an untapped opportunity for detection, and what it takes to integrate AI into national health systems at scale.
Europe has experienced warfare and violence at a scale most Americans can't fathom. As a result, countless millions of rounds of artillery, some of which is still buried in the earth poses a clear a present danger to 21st century citizens.Berten's job is to detect unexploded ordnance so that roads and buildings can be safely constructed. In the course of locating these dangerous implements of war, OTHER historic items and battlefield relics are discovered.As a result, Berten is often the first to detect these lost landscapes of battlefields that go back to Napoleonic times! Listen in as he talks about some of these historical sites and how some of them relate to veterans featured on The Warrior Next Door Podcast!Support the show
Welcome to my podcast. I am Doctor Warrick Bishop, and I want to help you to live as well as possible for as long as possible. I'm a practising cardiologist, best-selling author, keynote speaker, and the creator of The Healthy Heart Network. I have over 20 years as a specialist cardiologist and a private practice of over 10,000 patients. Dr. Warrick Bishop, a cardiologist and CEO of the Healthy Heart Network, discusses the importance of understanding inflammation for overall health. He explains that while acute inflammation is a beneficial response to injury, chronic low-grade inflammation is linked to serious health issues like heart disease, Alzheimer's, type 2 diabetes, and cancer. Causes of chronic inflammation include modern lifestyle factors such as highly processed foods, visceral fat (fat surrounding organs), sedentary behavior, poor sleep, stress, and environmental toxins. Gingivitis is also mentioned as a common chronic infection that can contribute to inflammation. Detecting inflammation can be done through specific blood tests like high-sensitivity C-reactive protein (HS CRP) and erythrocyte sedimentation rate (ESR), though general markers like high triglycerides, low HDL, insulin resistance, and increased waist circumference can also be indicators.
This podcast is brought to you by Outcomes Rocket, your exclusive healthcare marketing agency. Learn how to accelerate your growth by going to outcomesrocket.com Early cancer detection significantly increases survival rates and reduces the overall health and financial burden. In this episode, Trudy McKanna, Senior Field Medical Director for GRAIL, discusses how Galleri multi-cancer early detection technology is transforming screening by identifying signals for more than 50 cancers through a simple blood draw. She explains how methylation patterns, cell-free DNA, and machine learning allow clinicians to pinpoint potential cancer origins before symptoms appear. Trudy shares data showing that adding this test to standard screening detects seven times more cancers, with over half found at early stages, while maintaining a remarkably low false-positive rate. She also highlights its impact on underserved communities, population health scalability, and the importance of rigorous clinical validation. Tune in and discover how early detection can transform cancer outcomes! Resources Connect with and follow Trudy McKanna on LinkedIn. Follow GRAIL on LinkedIn and visit their website! Learn more about Galleri here.
We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn't sci-fi — it's happening regularly in deployment today. Marius Hobbhahn, CEO of the world's top research organisation focused on AI deception (Apollo Research), has been collaborating with OpenAI to figure out what causes OpenAI's reasoning models to 'scheme' against users.Links to learn more, video, and full transcript: https://80k.info/mh25 In a striking case uncovered by Apollo, when many AI models were told they would have capabilities removed if they performed too well on a test, they successfully “sandbagged” — intentionally answering questions incorrectly to appear less capable than they were, while also being careful not to perform so poorly it would arouse suspicion.These models had somehow developed a preference to preserve their own capabilities, despite never being trained in that goal or assigned a task that called for it.This doesn't cause significant risk now, but as AI models become more general, superhuman in more areas, and are given more decision-making power, it could become outright dangerous.In today's episode, Marius details his recent collaboration with OpenAI to train o3 to follow principles like “never lie,” even when placed in “high-pressure” situations where it would otherwise make sense.The good news: They reduced “covert rule violations” (scheming) by about 97%.The bad news: In the remaining 3% of cases, the models sometimes became more sophisticated — making up new principles to justify their lying, or realising they were in a test environment and deciding to play along until the coast was clear.Marius argues that while we can patch specific behaviours, we might be entering a “cat-and-mouse game” where models are becoming more situationally aware — that is, aware of when they're being evaluated — faster than we are getting better at testing.Even if models can't tell they're being tested, they can produce hundreds of pages of reasoning before giving answers and include strange internal dialects humans can't make sense of, making it much harder to tell whether models are scheming or train them to stop.Marius and host Rob Wiblin discuss:Why models pretending to be dumb is a rational survival strategyThe Replit AI agent that deleted a production database and then lied about itWhy rewarding AIs for achieving outcomes might lead to them becoming better liarsThe weird new language models are using in their internal chain-of-thoughtThis episode was recorded on September 19, 2025.Chapters:Cold open (00:00:00)Who's Marius Hobbhahn? (00:01:20)Top three examples of scheming and deception (00:02:11)Scheming is a natural path for AI models (and people) (00:15:56)How enthusiastic to lie are the models? (00:28:18)Does eliminating deception fix our fears about rogue AI? (00:35:04)Apollo's collaboration with OpenAI to stop o3 lying (00:38:24)They reduced lying a lot, but the problem is mostly unsolved (00:52:07)Detecting situational awareness with thought injections (01:02:18)Chains of thought becoming less human understandable (01:16:09)Why can't we use LLMs to make realistic test environments? (01:28:06)Is the window to address scheming closing? (01:33:58)Would anything still work with superintelligent systems? (01:45:48)Companies' incentives and most promising regulation options (01:54:56)'Internal deployment' is a core risk we mostly ignore (02:09:19)Catastrophe through chaos (02:28:10)Careers in AI scheming research (02:43:21)Marius's key takeaways for listeners (03:01:48)Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon MonsourMusic: CORBITCamera operator: Mateo Villanueva BrandtCoordination, transcripts, and web: Katy Moore