Podcasts about 25b

  • 252 podcasts
  • 325 episodes
  • 36m avg duration
  • 1 weekly episode
  • Latest episode: Feb 13, 2026


Latest podcast episodes about 25b

Touching Base
Women in Science, Robotics, Automation, SLAS, and Lilly Updates


Feb 13, 2026 · 32:44


Women in Science Day (February 11) topped the discussion list for GEN editors in this week's podcast. They shared an anecdote about the history of the term “scientist”; hint: it was coined for a woman. A modern scientist, Medra CEO Michelle Lee, discussed with GEN how the company is integrating robotics with AI for use in biological research. GEN attended SLAS this week and got an update on lab automation, along with efforts to increase the presence of women in biotech leadership. Finally, the editors review Eli Lilly's recent major deals and the latest from Nektar Therapeutics' clinical trials. Join GEN editors Corinna Singleman, PhD, Fay Lin, PhD, Uduak Thomas, and Alex Philippidis for a discussion of the latest biotech and biopharma news.

Listed below are links to the GEN stories referenced in this episode of Touching Base:

* Data Is a Robotics Problem, Medra CEO Says Physical AI Will Transform Biology. By Fay Lin, PhD, GEN Edge, February 11, 2026
* Robots on the Red Line: A Video Update from SLAS 2026. GEN, February 11, 2026
* SLAS Highlights: AI Labs, Small-Molecule SPR, Protein Interaction Assays, and Paper Labware. By Uduak Thomas, GEN, February 11, 2026
* SLAS Highlights: Opening Keynote Spotlights Novel Target in Genomically Unstable Tumors. By Uduak Thomas, GEN, February 11, 2026
* Opentrons Uses Nvidia Tech to Build Training Data That Powers Physical AI in the Lab. By Uduak Thomas, GEN, February 9, 2026
* Beyond Obesity: Lilly Inks Up to $11.25B in Cancer, Immune System Deals. By Alex Philippidis, GEN Edge, February 10, 2026
* Lilly, Seamless Ink Up-to-$1.12B Hearing Loss Collaboration. By Alex Philippidis, GEN Edge, January 28, 2026

Touching Base Podcast, hosted by Corinna Singleman, PhD. Behind the Breakthroughs, hosted by Jonathan D. Grinstein, PhD. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The First Mechanistic Interpretability Frontier Lab — Myra Deng & Mark Bissell of Goodfire AI


Feb 6, 2026 · 68:01


From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.

We discuss:

* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: “surgical edits” for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps

00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was like still putting out like toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lede, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting like health use cases. I don't know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.

Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.

Myra Deng [00:02:37]: Did we overlap at all?

Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things.
And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah.Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great. 
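
A minimal, illustrative sketch of the probing idea Vibhu describes above: train a small linear classifier on activations cached from one layer of a model. The activations here are random stand-ins rather than a real residual stream, and the shapes, labels, and training loop are invented for the example; this is not Goodfire's Ember API or any published pipeline.

```python
# Minimal sketch of a linear probe on cached activations (illustrative only).
# In practice the activations would come from a hook on a real model's
# residual stream; here they are random stand-ins so the example is self-contained.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_train = 512, 2048

# Pretend these were cached from one layer, with binary labels
# (e.g. "prompt contains the behavior we care about" vs "does not").
acts = torch.randn(n_train, d_model)
labels = torch.randint(0, 2, (n_train,)).float()

probe = nn.Linear(d_model, 1)            # one weight vector + bias
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(acts).squeeze(-1), labels)
    loss.backward()
    opt.step()

# At inference, scoring a cached activation is a single matrix-vector product,
# which is why probes add near-zero latency compared to an LLM-as-judge.
with torch.no_grad():
    score = torch.sigmoid(probe(acts[:1])).item()
print(f"probe score for first example: {score:.3f}")
```
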
I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where good fire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-talk poking at models as opposed to. To actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked a good fire, they wouldn't have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. 
So, you know, one of the things that we've been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Quen or, um, R1 and they have sort of like this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model's internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can make interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you're doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.Vibhu Sapra [00:11:30]: I mean, also tendentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of. Okay. If we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle. 
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting Z. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like we've always at the very beginning of the company thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really not falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. 
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think weren't an SAE based approach actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get it another chance, like what is the overall, like what is Rakuten's usage or production usage? 
Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with Interp is a lot of these methods are very efficient, right? So where you're just looking at a model's internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering and action. 
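
An illustrative sketch of the token-level detection pattern discussed above, where a lightweight probe scores every token's activation and flagged tokens are redacted before text is routed downstream. The tokens, threshold, and untrained probe are placeholders; this is not Rakuten's or Goodfire's actual deployment.

```python
# Illustrative sketch of token-level PII scrubbing with a lightweight probe.
# Names, tokens, and the threshold are made up for the example.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 512
tokens = ["My", "phone", "is", "555", "-", "0199", "thanks"]

# Stand-in per-token activations; in a real deployment these come from the
# serving stack at a chosen layer, one vector per token.
acts = torch.randn(len(tokens), d_model)

probe = nn.Linear(d_model, 1)             # assume weights were trained offline
with torch.no_grad():
    scores = torch.sigmoid(probe(acts)).squeeze(-1)   # one score per token

THRESHOLD = 0.5                            # tuned on held-out (synthetic) data
redacted = [t if s < THRESHOLD else "[PII]"
            for t, s in zip(tokens, scores.tolist())]
print(" ".join(redacted))
```
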
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it's, uh, it takes a full, like, H100 node. I think it's like, you can. You can run it on eight GPUs, eight H100s. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of our, uh, of the SGLang code base that we've been working on. So I'm going to tell it, Hey, this SGLang code base is slow. I think there's a bug. Can you try to figure it out? There's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize in real time. Some steering. Let's see here.

Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.

Shawn Wang [00:23:38]: Yeah.

Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally it might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing it go, "this code base is massive, for real." So we're going to start. We're going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.

Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's um, it's purely sort of its demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that it uses, but yeah, as we're seeing it come in. Pretty good. Outputs.

Shawn Wang [00:24:43]: Scheduler code is actually wild.

Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.

Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don't know while this is spewing up, but how, how do we find feature 43205.
Yeah.Mark Bissell [00:25:15]: So in this case, um, we, our platform that we've been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I'd say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top KSAEs, batch top KSAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your d input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I've run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that's the, that's the time of year to be like, Oh, I'm in this, I'm in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts andVibhu Sapra [00:26:34]: And I've got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that's very hard to detect. And it's like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that hallucinations is harder. But we've seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah. 
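
A toy sketch of the feature-labeling step Mark outlines above: encode cached activations with a sparse autoencoder and inspect which inputs make one feature fire hardest. The SAE here is a plain ReLU autoencoder with random weights, so the feature index and outputs are meaningless except as a shape-level illustration of the pipeline.

```python
# Sketch of the "max-activating examples" labeling step: encode activations
# with an SAE and rank inputs by how strongly one feature fires. Toy sizes
# and random weights only; a real SAE would be trained on model activations.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_sae, n_tokens = 256, 4096, 10_000

class TinySAE(nn.Module):
    """A plain ReLU sparse autoencoder (no top-k variant here)."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))     # sparse feature activations

sae = TinySAE(d_model, d_sae)              # assume it was trained on real activations
acts = torch.randn(n_tokens, d_model)      # stand-in for cached token activations

feature_id = 43_205 % d_sae                # the demo's feature ID, folded into toy width
with torch.no_grad():
    feat_acts = sae.encode(acts)[:, feature_id]

# The top-activating token contexts are what a human (or an LLM auto-labeler)
# reads to decide the feature means, e.g., "Gen Z slang".
top_vals, top_idx = feat_acts.topk(10)
print(top_idx.tolist())
```
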
So there's no sort of problems with like feature splitting and feature absorption. And then there's the off target effects, right? Ideally, you would want to be very precise where if you reduce the hallucination feature, suddenly maybe your model can't write. Creatively anymore. And maybe you don't like that, but you want to still stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done, any any other things that you want to highlight or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of inter being applied to models quite at this scale. You know, Anthropic certainly has some some. Research and yeah, other other teams as well. But it's it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real time steering of a trillion parameter model would have sounded.Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's it's an interesting one TBD of what the actual like production use case would be on that, like the real time editing. It's like that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you're you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. Like people haven't done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter? Like, is it like parameter? We just kind of leave it as a default. We don't use it. So I don't know. Maybe someone here wants to put out a guide on like how to use this with prompting when to do what?Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation. I think you would love from Act Deep on our team, who is an amazing researcher, just can't say enough amazing things about Act Deep. But he actually has a paper that as well as some others from the team and elsewhere that go into the essentially equivalence of activation steering and in context learning and how those are from a he thinks of everything in a cognitive neuroscience Bayesian framework, but basically how you can precisely show how. Prompting in context, learning and steering exhibit similar behaviors and even like get quantitative about the like magnitude of steering you would need to do to induce a certain amount of behavior similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so like formally equivalent actually in the in the limit. Right.Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks there. I don't know. Have you seen the stuff where you can do like many shot jailbreaking? You like flood the context with examples of the behavior. 
And Anthropic put out that paper.

Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.

Mark Bissell [00:31:40]: Like, yeah, what's in this in-context learning and activation steering equivalence paper is you can like predict the number. Number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of like equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.

Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back rationalize that this makes sense because, you know, what context is, is basically just, you know, it updates the KV cache kind of and like and then every next token inference is still like, you know, the sheer sum of everything all the way. It's plus all the context. It's up to date. And you could, I guess, theoretically steer that with you probably replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like like you did. So it's like not exactly equivalent.

Mark Bissell [00:32:33]: Right, right. There's sort of you need to get precise about, yeah, like how you sort of define steering and like what how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature. Yeah. The title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. And it's about the practical equivalence of in-context learning and activation steering. So Eric Bigelow, Dan Wurgaft, who are doing fellowships at Goodfire, Ekdeep's the final author there.

Myra Deng [00:32:59]: I think actually to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like imagine if you could adapt your model to be, you know, an expert legal reasoner. Like in almost real time, like very quickly, efficiently, using human feedback or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about like what's the next interface for model customization and adaptation is a really interesting problem for us. Like we have heard a lot of people actually interested in fine-tuning and RL for open weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's like something we're looking into.

Shawn Wang [00:34:06]: Yeah. I never thought so. Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?

Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?

Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of, yeah.

Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it's maybe like, are you modifying the pipes or are you modifying the water flowing through the pipes to get what you're after? Yeah. Just maybe one way.

Mark Bissell [00:34:44]: I like that analogy.
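
A toy sketch of the "pipes vs water" contrast just described: activation steering adds a scaled direction to activations at inference time via a forward hook, while a rank-one LoRA-style edit changes the weights themselves. The single linear layer stands in for a transformer block, and the direction and scales are arbitrary assumptions, not values from Goodfire's stack.

```python
# Sketch of activation steering vs. a weight edit ("water" vs "pipes").
# Everything here is a toy: one linear layer, a random unit direction,
# and arbitrary scales chosen only to show the mechanics.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 64
layer = nn.Linear(d_model, d_model)        # stand-in for one block's projection
steer_dir = torch.randn(d_model)
steer_dir = steer_dir / steer_dir.norm()   # a unit "feature direction"
SCALE = 4.0

def steering_hook(module, inputs, output):
    # Steering edits the activations flowing through the layer ("the water").
    return output + SCALE * steer_dir

handle = layer.register_forward_hook(steering_hook)
x = torch.randn(1, d_model)
steered = layer(x)                         # output shifted along steer_dir
handle.remove()                            # steering is gone once the hook is removed

# A rank-one LoRA-style update instead edits the weights ("the pipes"):
with torch.no_grad():
    u, v = torch.randn(d_model, 1), torch.randn(1, d_model)
    layer.weight += 0.01 * (u @ v)         # persistent parameter-space change
print(steered.shape)
```
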
That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're, that we're very focused on. And just the fact that like, I hope that we look back at how we're currently training models and post-training models and just think what a primitive way of doing that right now. Like there's no intentionalityShawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So, so Dan from Goodfire likes to use this analogy of, you know, he has a couple of young kids and he talks about like, what if I could only teach my kids how to be good people by giving them cookies or like, you know, giving them a slap on the wrist if they do something wrong, like not telling them why it was wrong or like what they should have done differently or something like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, and, you know, it's sample inefficient. There's, you know, what do they say? It's like slurping feedback. It's like, slurping supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that are, uh, internalized and, and, you know, steering is an inference time way of sort of getting that idea. But ideally you're moving to a world whereVibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question, was you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Like, does this, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can, you can watch that. There wasn't too much of a connect there, but it's still something, you know, it's something they want toMark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like there are certainly post-hocVibhu Sapra [00:36:39]: use cases where it doesn't need to touch that. I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? Like I would say, if you're interested in getting into research, MechInterp is one of the most approachable fields, right? A lot of this train an essay, train a probe, this stuff, like the budget for this one, there's already a lot done. There's a lot of open source work. You guys have done some too. Um, you know,Shawn Wang [00:37:04]: There's like notebooks from the Gemini team for Neil Nanda or like, this is how you do it. Just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're like, not even technical with any of this, you can still make like progress. There, you can look at different activations, but, uh, if you do want to get into training, you know, training this stuff, correct me if I'm wrong is like in the thousands of dollars, not even like, it's not that high scale. And then same with like, you know, applying it, doing it for post-training or all this stuff is fairly cheap in scale of, okay. I want to get into like model training. I don't have compute for like, you know, pre-training stuff. So it's, it's a very nice field to get into. And also there's a lot of like open questions, right? Um, some of them have to go with, okay, I want a product. I want to solve this. Like there's also just a lot of open-ended stuff that people could work on. 
That's interesting. Right. I don't know if you guys have any calls for like, what's open questions, what's open work that you either open collaboration with, or like, you'd just like to see solved or just, you know, for people listening that want to get into MechInterp because people always talk about it. What are, what are the things they should check out? Start, of course, you know, join you guys as well. I'm sure you're hiring.

Myra Deng [00:38:09]: There's a paper, I think from, was it Lee, uh, Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just like a really comprehensive overview of what are the things that experts in the field think are the most important problems to be solved. I also think to your point, it's been really, really inspiring to see, I think a lot of young people getting interested in interpretability, actually not just young people also like scientists who have been, you know, experts in physics for many years and in biology or things like this, um, transitioning into interp, because the barrier to entry is, you know, in some ways low and there's a lot of information out there and ways to get started. There's this anecdote of like professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how, I guess, like exciting the field is, how fast it's moving, how quick it is to get started and things like that.

Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open source MechInterp Slack channel. People are always posting questions and just folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open problems paper is a really good one.

Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...

Vibhu Sapra [00:39:40]: Normally summer internship style.

Myra Deng [00:39:42]: Yeah, but they've been doing it year round now. And actually a lot of our full-time staff have come through that program or gone through that program. And it's great for anyone who is transitioning into interpretability. There's a couple other fellows programs. We do one as well as Anthropic. And so those are great places to get started if anyone is interested.

Mark Bissell [00:40:03]: Also, I think it's been seen as a research field for a very long time. But I think engineering... I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere, as it does scale up.

Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first ever MechInterp track at AI Europe because I see these industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry MechInterp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet. It's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.

Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the McInturk workshop this year was actionable interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, like, just, you know, being in Europe, you see the Interp room. One, like old school conferences, like, I think they had a very tiny room till they got lucky and they got it doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered the paper last week. It's like two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah. Yeah. Yeah.Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet. It's just Interp for code. I think it's like an abnormally important field. We haven't mentioned this yet. The conspiracy theory last two years ago was when the first SAE work came out of Anthropic was they would do like, oh, we just used SAEs to turn the bad code vector down and then turn up the good code. And I think like, isn't that the dream? Like, you know, like, but basically, I guess maybe, why is it funny? Like, it's... If it was realistic, it would not be funny. It would be like, no, actually, we should do this. But it's funny because we know there's like, we feel there's some limitations to what steering can do. And I think a lot of the public image of steering is like the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To like be a legal reasoner seems like a huge stretch. Yeah. And I don't know if that will get there this way. Yeah.Myra Deng [00:42:36]: I think, um, I will say we are announcing. Something very soon that I will not speak too much about. Um, but I think, yeah, this is like what we've run into again and again is like, we, we don't want to be in the world where steering is only useful for like stylistic things. That's definitely not, not what we're aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning, um, are much more sophisticated and require breakthroughs in, in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. I think scale allows you to learn a lot of information and, and reduce noise across, you know, large amounts of data. But I also think we think that there's ways to do things much more effectively, um, even, even at scale. So like actually learning exactly what you want from the data and not learning things that you do that you don't want exhibited in the data. So we're not like anti-scale, but we are also realizing that scale is not going to get us anywhere. It's not going to get us to the type of AI development that we want to be at in, in the future as these models get more powerful and get deployed in all these sorts of like mission critical contexts. Current life cycle of training and deploying and evaluations is, is to us like deeply broken and has opportunities to, to improve. So, um, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that that's a use basically, or maybe just like a proof point that these concepts do exist. 
If you can manipulate them in precisely the right way, you can get the ideal combination of them that you want. Steering is maybe the most coarse-grained peek at what that looks like, but I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.

Myra Deng [00:44:30]: There were bad-code features. I've got it pulled up.

Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.

Shawn Wang [00:44:35]: This is exactly it.

Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not generic typo detection — it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in code. And they have malicious code, code error — a whole bunch of finer-grained sub-features. Yeah.

Shawn Wang [00:45:02]: So the rough intuition for me — why I brought up post-training — was that you have a few different rollouts with all these things turned off and on, and then that's synthetic data you can post-train on. Yeah.

Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.

Myra Deng [00:45:19]: You guys have the right idea. Exactly. We replicated a lot of these features in our Llama models as well.

Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours, DeepMind has open-sourced a lot of SAEs on Gemma, and even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.

Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well — an amazing piece of work for visualizing these things.

Myra Deng [00:45:49]: Yeah, exactly.

Shawn Wang [00:45:50]: I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up — AI for science — just because it's such a huge investment category and I'm less qualified to cover it; we have bio PhDs for that. But I want to recap your work, maybe on the Evo 2 stuff, and then build forward.

Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: I think another interesting lens on interpretability in general is that a lot of the techniques we've described are ways to solve the AI–human interface problem, and bidirectional communication is the goal there. What we've been talking about with intentional design of models — steering, but also more advanced techniques — is having humans impart our desires and control into and over models. The reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence like these scientific models that work on genomics data, medical imaging, things like that — but down the line, superintelligence of other forms as well.
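For readers who want a concrete picture of the feature steering discussed above — turning a "bad code" or Gen-Z-slang feature up or down in a model's activations — here is a minimal sketch in Python. It assumes a Llama-style Hugging Face model and a single steering direction; the model name, layer index, direction, and strength are illustrative placeholders (in practice the direction would come from an SAE decoder column), and this is not Goodfire's API.

```python
# Minimal activation-steering sketch: add a feature direction to the residual
# stream at one decoder layer during generation. All specifics (model, layer,
# direction, strength) are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any Llama-style causal LM
LAYER = 16                                   # illustrative layer to intervene on
ALPHA = 8.0                                  # steering strength; negative = "turn the feature down"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

# Stand-in for an SAE feature direction (e.g. a "code error" feature); a real
# run would use the corresponding column of the SAE's decoder matrix.
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()

def steer(module, inputs, output):
    # Llama-style decoder layers return a tuple; output[0] is the hidden state.
    hidden = output[0] + ALPHA * direction.to(device=output[0].device, dtype=output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(steer)
try:
    ids = tok("Write a function that parses a date string.", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later generations run unsteered
```

The "rollouts with features turned off and on" idea from the conversation is, in this picture, just running the same loop with different signs and strengths of ALPHA and keeping the generations as candidate post-training data.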
What knowledge can the AIs teach us — that's the other direction. And some of our life-science work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks, or whether they're picking up on spurious correlations. For instance, with genomics models you would like to know whether they're focusing on the biologically relevant things you care about, or whether they're using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman — where maybe they understand elements of the human genome that we don't have names for, or have made discoveries we don't know about — surfacing that is a big goal. And we're already seeing that: we're partnered with organizations like Mayo Clinic, a leading research health system in the United States, the Arc Institute, as well as a startup called Prima Menta, which focuses on neurodegenerative disease. In our partnership with them, we've used foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.

Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out because there's so much potential in the research. And it's very interesting how it's basically the same as language models, just with a different underlying dataset — the same exact techniques, no change, basically.

Mark Bissell [00:48:59]: Yeah. Well, and even in other domains, right? Like robotics — I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes these actions. It's transformers all the way down.

Vibhu Sapra [00:49:15]: We have MedGemma now, right? Even this week there was MedGemma 1.5, and they're training it on this stuff — 3D scans, medical domain knowledge, all of that. So there's a push from both sides. But one of the things about mech interp is that you're a little more cautious in some domains, right? Healthcare mainly being one — guardrails, understanding — we're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint, if we're trusting these systems to make claims, we want to know why and what's going on.

Myra Deng [00:49:51]: Yeah, I think there's definitely a deployment bottleneck to actually using foundation models for real patient-facing use cases. Say you're using a model for rare-disease prediction: you probably want some explanation as to why the model predicted a certain outcome — and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment, is a really big unlock for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
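The spurious-correlation check described here — asking whether a genomics model is leaning on something like ancestry rather than the biology of interest — is often approximated with a simple linear probe on cached activations. The sketch below uses synthetic stand-ins for the activations and labels; a real analysis would cache residual-stream activations from the foundation model and use actual sample metadata.

```python
# Minimal linear-probe sketch: test whether a confound (here an "ancestry"
# label) is linearly decodable from a model's internal activations.
# The data below is synthetic; real activations would be cached from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, d_model = 2000, 512

acts = rng.normal(size=(n_samples, d_model)).astype(np.float32)  # placeholder activations
ancestry = rng.integers(0, 2, size=n_samples)                     # placeholder confound labels

X_tr, X_te, y_tr, y_te = train_test_split(acts, ancestry, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])

# A high held-out AUC means the confound is easy to read off the representation,
# which is a cue to check whether the model's predictions actually rely on it.
print(f"ancestry probe AUC: {auc:.3f}")
```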
And I feel like we actually are doing that through our interp techniques — and almost by accident. We got reached out to very early on by these healthcare institutions, and none of us had healthcare backgrounds.

Shawn Wang [00:50:49]: How did they even hear of you? A podcast.

Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.

Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.

Myra Deng [00:50:55]: Everyone can call us.

Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.

Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time — like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models; let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about...

Shawn Wang [00:51:21]: Interp is a machine learning technique — machine learning skills apply everywhere, right? Yeah. It's just a general insight. Probably to finance too, I think, which would be fun given our histories. I don't know if you have anything to say there.

Mark Bissell [00:51:34]: Yeah, well, just across the sciences — we've also done work on materials science. It really runs the gamut.

Vibhu Sapra [00:51:40]: Yeah. Awesome. And for those who should reach out — you're obviously the experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play steering — more on the research side? Are there ideal design partners, customers, things like that?

Myra Deng [00:52:03]: Yeah, I can talk about maybe the non-life-sciences side, and then I'm curious to hear from you on the life-sciences side. We're looking for design partners across many domains: language, anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in what we call pixel space — world models, video models, even robotics — where there's not a very clean natural-language interface to interact with. We think interp can really help there, and we're looking for a few partners in that space.

Shawn Wang [00:52:43]: Just because you mentioned the keyword

This Week in Pre-IPO Stocks
E246: SpaceX acquires xAI, now $1.473T in secondary market; Waymo at $116B, +14% vs last round; ElevenLabs at $11B, $330M ARR; + more

This Week in Pre-IPO Stocks

Play Episode Listen Later Feb 6, 2026 14:30


Invest in pre-IPO stocks with AG Dillon & Co. Contact aaron.dillon@agdillon.com to learn more. Financial advisors only. www.agdillon.com
00:00 - Intro
00:07 - SpaceX Acquires xAI at a $1.25T Combined Valuation
01:39 - SpaceX Seeks Early Index Inclusion Once Public
02:50 - SpaceX Files for 1M AI Data-Center Satellites
03:37 - Anthropic Lines Up a $350B Employee Tender
04:16 - Anthropic Opus 4.6 Ships With 1,000,000 Tokens and "Agent Teams" for Parallel Work
04:53 - Goldman Uses Anthropic's Claude for AI Agents
05:49 - Waymo Nears a $16B Round at $110B as Alphabet Writes Most of the Check
06:34 - Cerebras Jumps to $23B Post-Money on a $1B Raise and a 750MW OpenAI Compute Deal
07:15 - ElevenLabs Raises $500M at $11B and Targets a 2x ARR Step-Up
07:59 - Clay's New Employee Tender - $5B Tender After ARR Hits $100M
08:52 - Lotus Health Raises $35M to Build a Free AI Primary-Care Practice Across All 50 States
09:46 - Goodfire Raises $150M at $1.25B to Make Black-Box Models Debuggable
10:42 - Accrual Raises $75M to Deliver AI to Slower-Adopting Industries/Sectors
11:28 - Fundamental Emerges With $255M and a $1.2B Valuation to Own Structured Data AI
12:33 - OpenAI Launches Standalone Coding App
13:29 - OpenAI Frontier Pitches the Enterprise Agent Control Plane as B2B Revenue Targets 50% of Total Rev

The Agile Brand with Greg Kihlstrom
#804: GenLayer CEO Albert Castellana on AI's accountability gap

The Agile Brand with Greg Kihlstrom

Play Episode Listen Later Jan 28, 2026 25:27


When an AI agent makes a decision that costs your company millions in a lawsuit, who do you fire? Agility requires both the speed to adopt new technologies like AI agents, as well as the foresight to build the guardrails that prevent that speed from driving your brand off a cliff. Today, we're going to talk about the hidden crisis brewing behind the AI revolution: the accountability gap. As companies race to replace roles with autonomous AI agents, a critical question is being ignored: when an agent makes a biased, unethical, or simply wrong decision that harms a customer or an employee, who is actually responsible? This isn't a future problem; it's happening right now, and it poses a massive threat to brand trust, customer relationships, and legal standing. To help me discuss this topic, I'd like to welcome, Albert Castellana, Co-Founder & CEO at GenLayer. About Albert CastellanaAlbert Castellana is Co-Founder & CEO at GenLayer. A serial crypto entrepreneur since 2013, Albert has co-founded and led major blockchain projects including Radix DLT, NEM.io, BadgerDAO, and StakeHound, reaching over $25B in combined market value. Albert brings extensive experience in decentralized finance and governance. Albert's leadership is driven by firsthand insight into how existing legal systems fall short for digital assets, fueling his passion to create a trustless, global arbitration layer. Albert Castellana on LinkedIn: https://www.linkedin.com/in/acastellana/ Resources GenLayer: https://www.genlayer.com Take your personal data back with Incogni! Use code AGILE at the link below and get 60% off an annual plan: ⁠https://incogni.com/agile⁠  The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow Drive your customers to new horizons at the premier retail event of the year for Retail and Brand marketers. Learn more at CRMC 2026, June 1-3. https://www.thecrmc.com/ Enjoyed the show? Tell us more at and give us a rating so others can find the show at: https://ratethispodcast.com/agile Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom Don't miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company

two & a half gamers

Creative output hits record highs, Turkey and India attract massive funding, Meta shuts down VR studios, Rockstar launches a paid mod marketplace, and Arc Raiders sells 12M copies in just 2.5 months.
The State of Gaming Marketing 2026 Report: https://www.appsflyer.com/resources/reports/gaming-app-marketing/?utm_source=influencer&utm_medium=Referral&utm_campaign=CO%3A+State+of+Gaming+01-2026&utm_term=matej&utm_content=Newsletter
This week shows how fast the industry is consolidating around execution, scale, and monetization efficiency.
What I cover
• Creative volume explosion
• $25B global UA spend
• Tale Monster & Liquid Nitro funding
• Meta VR studio shutdowns
• Rockstar's paid mod marketplace
• Valve's Steam Machine update
• Ubisoft layoffs
• Arc Raiders breakout success
2026 is not slowing down.
---------------------------------------
This is a no-BS gaming podcast, a two & a half gamers session. Sharing actionable insights, dropping knowledge from our day-to-day User Acquisition, Game Design, and Ad monetization jobs. We are definitely not discussing the latest industry news, but having so much fun! Let's not forget this is a 4 a.m. conference discussion vibe, so let's not take it too seriously.
Panelists: Jakub Remiar, Felix Braberg, Matej Lancaric
Podcast: Join our Slack channel here: https://join.slack.com/t/two-and-half-gamers/shared_invite/zt-2um8eguhf-c~H9idcxM271mnPzdWbipg
Chapters
00:00 — Why this week matters
00:32 — Creative volume explodes (2,600 creatives per quarter)
01:40 — AI shifts from creation to reporting
02:45 — Tale Monster raises $30M (Turkey momentum)
03:40 — Liquid Nitro and AI live-service model
04:45 — Global UA spend hits $25B
05:35 — GDC becomes a premium festival
06:20 — Meta shuts down VR studios
07:10 — Rockstar launches paid mods
08:05 — Valve updates Steam Machine strategy
08:45 — Ubisoft layoffs & Arc Raiders breakout
09:25 — Final take for 2026
---------------------------------------
Matej Lancaric, User Acquisition & Creatives Consultant: https://lancaric.me
Felix Braberg, Ad monetization consultant: https://www.felixbraberg.com
Jakub Remiar, Game design consultant: https://www.linkedin.com/in/jakubremiar
---------------------------------------
Please share the podcast with your industry friends, dogs & cats. Especially cats! They love it!
Hit the Subscribe button on YouTube, Spotify, and Apple!
Please share feedback and comments - matej@lancaric.me
---------------------------------------
If you are interested in getting UA tips every week on Monday, visit lancaric.substack.com & sign up for the Brutally Honest newsletter by Matej Lancaric
Do you have UA questions nobody can answer? Ask Matej AI - the First UA AI in the gaming industry! https://lancaric.me/matej-ai

HyperChange
How To Value The SpaceX IPO

HyperChange

Play Episode Listen Later Dec 23, 2025 21:09


Interviewing Larry Goldberg (aka Tesla Larry) about SpaceX's upcoming IPO. We discuss the company's proposed $1.5T valuation and whether that's overvalued or undervalued. SpaceX is currently operating its launch and Starlink businesses at a ~$25B revenue run-rate, and Larry believes the new V3 Starlink satellites could expand this significantly. Datacenters in space are coming, but may not add to the bottom line for another 4 or 5 years. And everything hinges on the success of Starship to enable these new businesses.
0:00 SpaceX IPO at $1.5T Valuation
2:49 Starship Enables New Businesses
4:08 Starlink's Military Potential & Strategic Value
5:54 New Satellites From Starlink Are Gamechangers
7:25 AI Datacenters In Space
11:48 Elon Musk's Focus on Tesla's AI Chips
13:09 When Does SpaceX Profit From Datacenters in Space
14:17 Will Datacenters In Space Work?
16:33 Everything Relies On Starship's Success
18:48 SpaceX IPO: Under or Overpriced?
Tesla Larry on X: https://x.com/TeslaLarry
My X: / gfilche
HyperChange Patreon :) / hyperchange
Disclaimer: Tesla Larry and I are long Tesla and SpaceX stock; this show is not financial advice.

Family Office Podcast:  Private Investor Interviews, Ultra-Wealthy Investment Strategies| Commercial Real Estate Investing, P
Why Billionaires Don't Coast: Positioning, Focus & Niche Strategy for Serious Capital Raisers

Family Office Podcast: Private Investor Interviews, Ultra-Wealthy Investment Strategies| Commercial Real Estate Investing, P

Play Episode Listen Later Dec 17, 2025 10:28


In this live investor event session, Richard C. Wilson breaks down what truly separates billionaires and centimillionaires from "normal" successful people, and why most founders, fund managers, and advisors unintentionally take their foot off the gas once they hit a few million.
Richard shares insights from dinners with heads of an $8B+ family office, and then dives deep into one of the most overlooked force multipliers in capital raising: positioning. He explains why the founder of Paychex refuses to invest in any company that isn't branded to win, and how a simple rebrand at $8B AUM helped one wealth advisor grow past $25B+ in assets.
You'll learn:
Why top families keep pushing while others coast at $5–10M
How a crystal-clear niche and brand name can make your firm "preeminent" in your space
The Jim Collins hedgehog concept applied to capital raisers (passion, profit, and DNA)
How to choose a 7% niche that captures 70% of the profits
Why generic names like "XYZ Capital" quietly kill replies, deal flow, and investor trust
How to send investor emails that deliver real value first (checklists, DDQ templates, tools) instead of "Can I pitch you my deal?"
Richard also shares real before/after examples of rebrands that moved firms from confusing and forgettable to institutional-grade and obvious at a glance, plus how his own YouTube positioning has led to cold inbound from founders doing tens of millions in revenue.

Let's Talk AI
#227 - Jeremie is back! DeepSeek 3.2, TPUs, Nested Learning

Let's Talk AI

Play Episode Listen Later Dec 9, 2025 94:40


Our 227th episode with a summary and discussion of last week's big AI news!
Recorded on 12/05/2025
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
DeepSeek 3.2 and Flux 2 release, showcasing advancements in open-source AI models for natural language processing and image generation respectively.
Amazon's new AI chips and Google's TPUs signal potential shifts in AI hardware dominance, with growing competition against Nvidia.
Anthropic's potential IPO and OpenAI's declared 'Code Red' indicate significant moves in the AI business landscape, including high venture funding rounds for startups.
Key research papers from DeepMind and Google explore advanced memory architectures and multi-agent systems, indicating ongoing efforts to enhance AI reasoning and efficiency.
Timestamps:
(00:00:10) Intro / Banter
(00:02:42) News Preview
Tools & Apps
(00:03:30) DeepSeek 3.2: New AI Model is Faster, Cheaper and Smarter
(00:23:22) Black Forest Labs launches Flux.2 AI image models to challenge Nano Banana Pro and Midjourney
(00:28:00) Sora and Nano Banana Pro throttled amid soaring demand | The Verge
(00:29:34) Mistral closes in on Big AI rivals with new open-weight frontier and small models | TechCrunch
(00:31:41) Kling's Video O1 launches as the first all-in-one video model for generation and editing
(00:34:07) Runway rolls out Gen 4.5 AI video model that beats Google, OpenAI
Applications & Business
(00:35:18) NVIDIA's Partners Are Beginning to Tilt Toward Google's TPU Ecosystem, with Foxconn Reportedly Securing TPU Rack Orders
(00:40:37) Amazon releases an impressive new AI chip and teases an Nvidia-friendly roadmap | TechCrunch
(00:43:03) OpenAI declares 'code red' as Google catches up in AI race | The Verge
(00:46:20) Anthropic reportedly preparing for massive IPO in race with OpenAI: FT
(00:48:41) Black Forest Labs raises $300M at $3.25B valuation | TechCrunch
(00:49:20) Paris-based AI voice startup Gradium nabs $70M seed | TechCrunch
(00:50:10) OpenAI announced a 1 GW Stargate cluster in Abu Dhabi
(00:53:22) OpenAI's investment into Thrive Holdings is its latest circular deal
(00:55:11) OpenAI to acquire Neptune, an AI model training assistance startup
(00:56:11) Anthropic acquires developer tool startup Bun to scale AI coding
(00:56:55) Microsoft drops AI sales targets in half after salespeople miss their quotas - Ars Technica
Projects & Open Source
(00:57:51) [2511.22570] DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
(01:01:52) Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory
Research & Advancements
(01:05:44) Nested Learning: The Illusion of Deep Learning Architecture
(01:13:30) Multi-Agent Deep Research: Training Multi-Agent Systems with M-GRPO
(01:15:50) State of AI: An Empirical 100 Trillion Token Study with OpenRouter
Policy & Safety
(01:21:52) Trump signs executive order launching Genesis Mission AI project
(01:24:42) OpenAI has trained its LLM to confess to bad behavior | MIT Technology Review
(01:29:34) US senators seek to block Nvidia sales of advanced chips to China
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Take 2: Utah's Legislature with Heidi Hatch, Greg Hughes and Jim Dabakis
Take 2 Podcast: Utah GOP fuels repeal push with millions as redistricting fight intensifies

Take 2: Utah's Legislature with Heidi Hatch, Greg Hughes and Jim Dabakis

Play Episode Listen Later Dec 5, 2025 62:25


Utah GOP has raised over $4 million to repeal Proposition 4. The money is being donated by 501(c)(4) out of MA Signatures gathered- Rob Axson Utah GOP Chair reports healthy pace (tens of thousands) and growing momentum. None turned in yet. Utah GOP filed for a stay, no ruling yet Sutherland Institute Survey: Utah voters think elected officials, not judges, should choose congressional maps https://sutherlandinstitute.org/utah-redisticting-survey/ Do voters agree or disagree with the court-driven process produced by proponents of Proposition 4? Sutherland Institute recently commissioned Y2 Analytics to conduct a survey of Utah voters to investigate this question. By a 63-point margin, Utah voters believe that the policymakers they elect should be making redistricting decisions over judges. A full 71% of Utah voters say that an elected body or elected official at the state or county levels ought to have primary responsibility to decide congressional maps, compared to 8% who say judges should be primarily responsible. A plurality of Utah voters say that a body elected by the people (e.g., the Utah Legislature or county council) should be making redistricting decisions, while 21% of voters say it should be a state-level elected official (e.g., the governor), and 15% say it should be a county-level elected official (e,g., a county mayor). 5 Democratic Candidates in the D1 2025 Race State Sen. Nate Blouin announced his candidacy Nov. 24- now backed by BernieNewcomer Luis Villarreal entered the race1st in Sen. Kathleen RiebeFormer congressman Ben McAdamsFormer state Sen. Derek Kitchen. REP. VERONA MAUGA looking to pass gun law change in upcoming session after shooting of Afa Ah Loo at No Kings ProtestDrafting bill to keep long guns off the streets during protestsManslaughter charges filed this week against the “peace keeper” Cox proposes $30.7B budget with funding for homeless campus, child tax creditsGov. Spencer Cox proposes a $30.7 billion budget prioritizing literacy, a homeless campus and child tax credits.The budget includes $20 million for reading support and $25 million for a homeless campus in Salt Lake City.Cox's proposal also allocates funds for school safety, technical colleges and outdoor recreation.fund a literacy public awareness campaign and invest tens of millions in "paraprofessionals" to help teachers give an extra hand to young students who are falling behind. That would include $20 million for reading support in elementary schools that fall below the statewide proficiency benchmark of 70% of third graders reading at grade level. How the Trump administration could make or break Utah Gov. Cox's budget prioritiesTrump tax bill leads to projected $300 million decrease in state income tax revenue.Gov. Cox prioritizes $25 million in budget to finish construction of homeless campus.Cox not proposing income tax cuts like the ones he supported for five years in a row. https://governor.utah.gov/press/gov-cox-releases-fy-2027-budget-proposal-focused-on-responsible-fiscal-management-strong-families-and-long-term-prosperity/ Rep. Blake Moore joins Pres. Trump to celebrate Michael and Susan Dell's $6.25B investment in Trump Accounts Congressman Blake Moore joined President Trump at the White House to celebrate Michael and Susan Dell's monumental $6.25 billion investment in Trump Accounts for children. Congressman Moore introduced legislation in the House to establish these investment accounts, which was passed and signed into law as part of the One Big Beautiful Bill Act on July 4th, 2025. 
U.S. pauses immigration applications from 19 countries after two National Guard members shot in DCSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Jordan Is My Lawyer
December 4, 2025: The Truth About Somalis in Minnesota, What We Know About Hegseth's Authorization of a Follow-Up Strike, Trump Voids Biden's Autopen Actions, and More.

Jordan Is My Lawyer

Play Episode Listen Later Dec 4, 2025 52:29


SUBSCRIBE TO JORDAN'S FREE NEWSLETTER. PEACE TALKS: Want Jordan's advice on how to navigate relationships amid the polarizing political climate? ⁠SUBMIT YOUR DILEMMA HERE⁠. Get the facts, without the spin. UNBIASED offers a clear, impartial recap of US news, including politics, elections, legal news, and more. Hosted by lawyer Jordan Berman, each episode provides a recap of current political events plus breakdowns of complex concepts—like constitutional rights, recent Supreme Court rulings, and new legislation—in an easy-to-understand way. No personal opinions, just the facts you need to stay informed on the daily news that matters. If you miss how journalism used to be, you're in the right place. In today's episode: What We Know About the Follow-Up Strike on the Alleged Drug Boat in the Caribbean (1:12) Trump Threatens to Void All Biden Actions Signed With Autopen, But Can He? (13:42) ICE to Target Somali Migrants in Minnesota Amid Accusations of Fraud; Here's What We Know (~21:27) White House Launches New 'Media Bias' Webpage (~44:13) Quick Hitters: Dell Family Donates $6.25B to Trump Accounts, New DoD Inspector General Report on Hegseth's Signal Chat, Trump Pardons Democratic Representative (~47:29) Rumor Has It: Did the DOJ Spend Nearly $1M in Overtime Pay for Agents to Redact Epstein Files? Does Kamala Harris Want the Voting Age Lowered to 16? (~50:02) Critical Thinking Segment (~53:01) SUBSCRIBE TO JORDAN'S FREE NEWSLETTER. Watch this episode on YouTube. Follow Jordan on Instagram and TikTok. All sources for this episode can be found here.  Learn more about your ad choices. Visit podcastchoices.com/adchoices

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Black Forest Labs Raises $300M at $3.25B Valuation

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

Play Episode Listen Later Dec 4, 2025 8:55


In this episode, we dive into Black Forest Labs' $300M raise and what this new $3.25B valuation means for its position in the rapidly evolving AI landscape. We also discuss how this influx of capital could accelerate its model development and competitive ambitions.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The A.M. Update
THIS Is Lost In the Hullabaloo of Blowin' Up Boats | Time For Trump to Go Totally Domestic | 12/3/25

The A.M. Update

Play Episode Listen Later Dec 3, 2025 24:20


The real scandal in the Hegseth narco-terrorist strikes? Democrats are more furious about U.S. strikes on drug boats poisoning America than an Afghan national ambushing National Guardsmen in D.C. – exposing their depravity. Stefanik blasts Speaker Johnson for blocking deep-state reforms; Trump teases 2028 bench including Vance and Rubio; DOT threatens Minnesota's highway funds over fake Somali trucker licenses; Minneapolis mayor vows to shield Somali fraudsters from feds; Michael and Susan Dell pledge $6.25B to seed 25M kids' Trump investment accounts; Colorado dad heroically shoves armed burglar down stairs to protect sleeping children; and Scott Jennings nails why 2026 must laser-focus on domestic wins to avoid leftist court-packing nightmare.   The AM Update, Aaron McIntire, Pete Hegseth narco strikes, Afghan ambush, Somali fraud Minnesota, Trump accounts children, Michael Dell donation, home invasion Colorado, Elise Stefanik Mike Johnson, Scott Jennings 2026, deep state reform, Steve Deace Show

Politics Done Right
Dell's $6.25B for Kids Meets Van Hollen's Medicare for All Push in Defining Policy Clash

Politics Done Right

Play Episode Listen Later Dec 3, 2025 58:00


Dell's $6.25B child-investment plan meets Van Hollen's call for Medicare for All, exposing the gap between charity and structural reform in a nation failing working families.
Subscribe to our Newsletter: https://politicsdoneright.com/newsletter
Purchase our Books:
As I See It: https://amzn.to/3XpvW5o
How To Make America Utopia: https://amzn.to/3VKVFnG
It's Worth It: https://amzn.to/3VFByXP
Lose Weight And Be Fit Now: https://amzn.to/3xiQK3K
Tribulations of an Afro-Latino Caribbean man: https://amzn.to/4c09rbE

Grain Markets and Other Stuff
"There is No Trade Deal" - China Buys Only 3% of US Soybean "Commitments"

Grain Markets and Other Stuff

Play Episode Listen Later Nov 17, 2025 10:50


Joe's Premium Subscription: www.standardgrain.com
Grain Markets and Other Stuff Links: Apple Podcasts, Spotify, TikTok, YouTube
Futures and options trading involves risk of loss and is not suitable for everyone.

Bright Spots in Healthcare Podcast
Inside Credentialing: Where AI Delivers Measurable ROI for Health Plans

Bright Spots in Healthcare Podcast

Play Episode Listen Later Nov 4, 2025 58:28


In this Bright Spots in Healthcare episode, host Eric Glazer sits down with three leaders reshaping one of healthcare's most overlooked — yet mission-critical — functions: provider credentialing. Credentialing is the quiet infrastructure of trust in healthcare. When it's done right, patients get timely access to high-quality care, providers get paid faster, and health plans stay compliant. When it fails, backlogs grow, compliance risk skyrockets, provider satisfaction plummets, and member access suffers. Joining Eric for this discussion: Sandra Clarke, Former CFO & COO, Blue Shield of California Brett Dooies, Head of Product, Verifiable Janan Dave, VP of Operations, Verifiable Together, they explore how AI and automation are transforming credentialing from a slow, manual compliance task into a strategic capability that improves efficiency, trust, and network readiness. In this episode, you'll learn: Why credentialing sits at the intersection of compliance, provider experience, and member access How legacy processes, staffing limits, and messy data create hidden risk, and why backlogs can grow like quicksand Practical ways health plans are applying AI to reduce verification time, speed onboarding, and triage high-risk cases Why the most successful plans treat credentialing as infrastructure, not paperwork Key metrics to track when modernizing credentialing, including turnaround time, backlog clearance, audit readiness, and provider experience What to automate first,  and why humans still play a critical oversight role Bright Spots include: 97% automated verification in seconds across millions of records monthly New staffing and automation models that increase speed without compromising compliance Real-world examples where AI prevented risk exposure and accelerated network growth Leadership lessons in adopting AI responsibly and avoiding the "lift-and-shift" trap   This conversation offers payer leaders a real-world playbook to modernize credentialing and strengthen the foundation of your healthcare organization. Panelist Bios: Sandra Clarke is a healthcare executive and board advisor with over 25 years of experience leading finance, operations, and large-scale transformation across payer, provider, and life sciences organizations. As former CFO and COO of Blue Shield of California, she oversaw $25B in annual revenue and spearheaded initiatives delivering $700M in annualized savings while reimagining the company's pharmacy care model. Clarke has also held senior leadership roles at Daiichi Sankyo and Philips Healthcare and serves on multiple healthcare boards. She holds degrees from MIT, Bentley University, and Seton Hall University School of Law. Janan Dave is the VP of Operations at Verifiable, a start-up offering software and services solutions for healthcare organizations to ease the challenges surrounding provider network management. Janan has a background in public health and health policy, and has spent the last decade helping scale operations at various healthcare startups. She is passionate about building smart solutions to reduce waste in the healthcare system, and promote better care especially for the aging population, family caregivers, and women. Janan studied public health at the University of Pennsylvania, and lives in Brooklyn, NY. Brett Dooies is the Head of Product at Verifiable, where he leads the development of AI-powered solutions to simplify healthcare credentialing and monitoring. 
With a decade of experience building enterprise software, he specializes in applying advanced AI and analytics to enhance the customer experience and deliver transformative solutions. Drawing on his background in modernizing banking software, Brett is dedicated to creating products that drive operational excellence, uphold regulatory compliance, and improve data accuracy for Verifiable's partners, helping them scale with confidence in a complex ecosystem. Resources: MIT Sloan "Internet of AI Agents: State of AI in Business 2025" report finds that although over 80 % of organizations have piloted generative AI tools, only around 5 % have achieved meaningful business transformation—a gap dubbed the "GenAI Divide". It highlights that the primary barrier isn't model technology or regulation, but rather the failure of AI systems to integrate deeply into workflows, learn from feedback, and scale beyond the pilot stage. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf Thank you to our Episode Partner, Verifiable: Verifiable is a credentialing and network monitoring platform built to help healthcare organizations optimize operations with error-free, fast verifications and to stay compliant with ease. Backed by their in-house NCQA certified credentialing team that bring a combined 60+ years of experience, Verifiable's innovation supports managing trusted networks at scale through 97% verification automation in seconds with millions processing each month. Verifiable works with leading healthcare organizations such as Humana Dental, Zelis, Talkspace, Headway, Empower Pharmacy, and many others.   Learn more about them at https://verifiable.com/ Want to go deeper or schedule a briefing with Verifiable? Email hkrish@brightspotsventures.com and we'll coordinate time with the Verifiable team to discuss how their approach can help your plan reduce costs, accelerate onboarding, and strengthen network integrity. About Bright Spots Ventures: Bright Spots Ventures is a healthcare strategy and engagement company that creates content, communities, and connections to accelerate innovation.   We help healthcare leaders discover what's working, and how to scale it. By bringing together health plan, hospital, and solution leaders, we facilitate the exchange of ideas that lead to measurable impact. Through our podcast, executive councils, private events, and go-to-market strategy work, we surface and amplify the "bright spots" in healthcare, proven innovations others can learn from and replicate. At our core, we exist to create trusted relationships that make real progress possible. Visit our website at www.brightspotsinhealthcare.com.  

#AutisticAF Out Loud
LIVESTREAM: Trump Brings Gaza War Crimes Home to US Autistics

#AutisticAF Out Loud

Play Episode Listen Later Nov 3, 2025 14:39


Thank you to everyone who tuned into my live video! Join me for my next live video in the app.Show notes and transcript up tomorrow, 11/3.#AutisticAF Out Loud Newsletter is a reader-supported publication. Click to receive new posts… free. To support my work, please consider a paid subscription.Notes, sources, and further readingnot comprehensive or complete, but where I startedInternational Law: Starvation as War CrimeSupporting Sources:* Rome Statute of the International Criminal Court, Article 8(2)(b)(xxv): Case Matrix Network documenting “Intentionally using starvation of civilians as a method of warfare” as war crime casematrixnetwork​* D'Alessandra, Federica and Matthew Gillett. “The war crime of starvation in non-international armed conflict.” Oxford Blavatnik School of Government Working Paper BSG-WP-2019-031 (November 2019) bsg.ox​Counter/Nuance Source:* Lieber Institute West Point. “The War Crime of Starvation – The Irony of Grasping at Low Hanging Fruit” (September 2024): Notes starvation crime requires armed conflict context and specific intent elements; discusses challenges of prosecution lieber.westpoint​SNAP Shutdown & November 2025 Funding CrisisSupporting Sources:* CBS News. “SNAP funding is set to lapse Nov. 1, leaving recipients empty-handed” (October 30, 2025): USDA memo states “the well has run dry” and “At this time, there will be no benefits issued November 01”; 42 million Americans affected cbsnews+1​* NBC News. “Government shutdown effects bear down on millions more people after a crucial Nov. 1 deadline passes” (November 1, 2025): Despite judge's ruling, Trump administration indicated November SNAP payments likely delayed nbcnews​Counter/Nuance Source:* NBC News. “Federal judge orders Trump administration to pay SNAP benefits out of contingency fund” (October 31, 2025): Rhode Island Judge McConnell and Massachusetts Judge Talwani ruled USDA must use $5.25B contingency fund; creates uncertainty about timing rather than total cutoff nbcnews​Government Shutdown Timeline & StatusSupporting Sources:* Wikipedia. “2025 United States federal government shutdown” (updated November 2025): Documents shutdown began 12:01 AM EDT October 1, 2025; became second-longest (22 days) on October 22; resulted from partisan disagreements over spending, foreign aid, and ACA health subsidies wikipedia​* CBS News. “The 2025 U.S government shutdown, by the numbers” (October 30, 2025): Senate has voted 13 times on House-passed continuing resolution; all failed to reach 60-vote threshold needed to overcome filibuster cbsnews​Counter/Nuance Source:* NPR. “The federal government is still shut down. Here's what that means across the country” (October 30, 2025): Notes Republicans blame Democrats for voting against funding 14 times; Democrats counter that GOP refuses to address expiring ACA tax credits affecting 24 million Americans npr​USDA Refusal to Use Emergency FundsSupporting Sources:* Texas Tribune. “The federal shutdown will halt November SNAP benefits” (October 28, 2025): USDA Secretary Brooke Rollins stated October 27 via USDA website that no November 2025 SNAP benefits would be issued; agency memo says “contingency funds are not legally available to cover regular benefits” texastribune​* USA Today. “Government shutdown live updates” (November 2, 2025): Documents that USDA claimed $5.25 billion contingency fund reserved for disasters, not regular benefits; judges ordered use anyway usatoday​Counter/Nuance Source:* Fortune. 
“Judges order Trump administration to use emergency reserves for SNAP payments during the shutdown” (October 31, 2025): Federal courts rejected USDA legal interpretation; Massachusetts Judge Talwani ruled government “obligated to deploy contingency funds as necessary” fortune​Social Security & Trump WarningsSupporting Sources:* Newsweek. “Social Security, Medicare are ‘going to be gone,' Donald Trump warns” (October 21, 2025): Reports Trump statement during shutdown linking Democratic opposition to potential program loss newsweek​* Duke University Government Relations. “Fall 2025 Government Shutdown Updates” (October 31, 2025): Notes “Social Security ‘could vanish,' Trump warns” among shutdown impacts; documents 31-day shutdown status governmentrelations.duke​Counter/Nuance Source:* American Progress. “The Trump Administration's Plans To Covertly Cut Social Security Disability Benefits” (October 2025): Distinguishes between shutdown rhetoric and separate regulatory changes to tighten disability eligibility criteria americanprogress​Autism Employment & Benefit DependencySupporting Sources:* Autism Society. “Employment Statistics” (October 2025): Reports up to 85% of autistic adults with college degrees unemployed or underemployed; notes 40% lower earnings than peers with other disabilities autismsociety​* Kids Club ABA. “Autism Unemployment Rate” (May 2025): Cites National Autism Indicators Report showing 14-16% full-time employment among autistic adults kidsclubaba​Counter/Nuance Source:* Reddit r/autism. “PSA: The ‘85% autism unemployment rate' isn't accurate” (July 2024): Statistical critique noting figure conflates unemployment, underemployment, and labor force non-participation; argues if 85% of autistic adults were unemployed, they'd represent 94% of all unemployed at 4% national rate reddit​“Useless Eaters” & Eugenic RhetoricSupporting Sources:* Mostert, Mark P. “Useless Eaters: Disability as Genocidal Marker in Nazi Germany.” Documents Binding & Hoche 1920 tract; eugenic progression from efficiency language to T-4 program catholicculture+2​* NIH/PMC. “Confronting the Legacy of Eugenics and Ableism” (December 2023): Shows Industrial Revolution capitalist productivity models reframed disability as state cost pmc.ncbi.nlm.nih​Counter/Nuance Source:* Migration journal. “Reconsidering the history of eugenics and discrimination” (December 2024): Notes eugenic ideas were “deeply intertwined” with race, gender, class and disability—varied significantly across national contexts academic.oup​Boomerang Effect & Internal ColonialismSupporting Sources:* Wikipedia. “Imperial boomerang”: Documents Césaire's “terrific boomerang” thesis from Discourse on Colonialism (1950); Foucault's “Society Must Be Defended” lecture (1976) on colonial tactics returning home wikipedia​* Osun Global Commons. “Césaire's Boomerang Effect on the Streets of Berlin” (March 2023): Analyzes how European bourgeoisie “tolerated Nazism before it was inflicted on them” because it targeted non-Europeans first osunglobalcommons​Counter/Nuance Source:* Reality Studies. “The Department of War on American Cities, Ukraine, Gaza, and the Imperial Boomerang” (September 2025): Cautions against deterministic causation in linking colonial and domestic tactics realitystudies​Britain: Colonial Policing to Domestic ControlSupporting Sources:* Wikipedia. “Aliens Act 1905”: Documents how British emergency powers and crowd-control from Ireland informed domestic legislation wikipedia​* Human Rights Watch. 
“This Alien Legacy: The Origins of ‘Sodomy' Laws in British Colonialism” (December 2008): Shows British colonial legal mechanisms later echoed in domestic law hrw​Counter/Nuance Source:* Past & Present. “Aliens in a Revolutionary World” (April 2022): Notes British Alien Act 1793 “fell into disuse” post-Napoleonic Wars, complicating narrative of automatic domestic adoption academic.oup​France/Algeria: Torture Techniques to ParisSupporting Sources:* World Socialist Web Site. “Maurice Papon and the October 1961 massacre of Paris” (October 2021): Documents Papon's 1956-58 Algeria torture role, then as Paris police chief applied “same methods” in 1961 massacre wsws​* BBC. “How a massacre of Algerians in Paris was covered up” (October 2021): Confirms Papon supervised “repression and torture” in Algeria 1956; police records show he directed 1961 Paris massacre tactics bbc​Counter/Nuance Source:* LA Review of Books. “How to Forget a Massacre” (October 2019): Emphasizes Papon's individual agency empowered by de Gaulle rather than systemic inevitability; many police refused participation lareviewofbooks​U.S. Philippines to Domestic Militarized PolicingSupporting Sources:* The Diplomat. “How America's Wars in Asia Militarized the Police at Home” (June 2020): Documents Philippine Constabulary (1901) as hybrid military-police; veterans imported counterinsurgency techniques to U.S. law enforcement thediplomat​* Brown University Costs of War. “How the United States' Post-9/11 Wars Helped Militarize U.S. Police” (September 2020): Traces “colonial and anti-Black roots” through Philippines to 1033 program watson.brown​Counter/Nuance Source:* Jacobin. “Policing Empire” (September 2014): Argues policing-empire link involves domestic political contestation each era, not automatic transfer jacobin​Ottoman Empire: Genocides & StarvationSupporting Sources:* USHMM Holocaust Encyclopedia. “The Armenian Genocide (1915-16): In Depth” (August 2023): Documents centralized CUP deportation orders as “death warrant”; forced marches caused starvation, dehydration, exposure deaths encyclopedia.ushmm​* Genocide Education Project. “Brief History” (February 2016): Estimates 1.5M Armenians killed, 2M+ Christians total including Greeks and Assyrians genocideeducation​Counter/Nuance Source:* University of South Florida Genocide Studies. “The Ottoman Genocide of the Assyrians”: Notes genocides were “culmination of series of policies”; emphasizes WWI context and CUP nationalist ideology as distinct causal streams digitalcommons.usf​Black Radical Thought & Internal ColonialismSupporting Sources:* Gilderle hrman Institute. “Both Black and Disabled: Intersectional Experiences” (June 2022): Traces eugenic scientific racism; notes Black disabled Americans as “internal colonies” subject to extraction and surveillance gilderlehrman​* NIH/PMC. “Past Is Prologue: Dismantling Colonial Legacies to Advance Black Health” (December 2023): Argues chattel slavery was “expansive colonial project”; mass incarceration ongoing colonial project pmc.ncbi.nlm.nih​Counter/Nuance Source:* University of Miami. “The Forgotten Activists: Black People in the Disability Rights Movement” (January 2022): Notes disability movement historically “comprised of White people”; cautions against conflating marginalization without attending to specific mechanisms repository.law.miami​Food Insecurity & Violence (Structural Violence Frame)Supporting Sources:* NIH/PMC. 
“Association of Food Insecurity With Multiple Forms of Interpersonal Violence” (April 2023): 19 of 20 studies show food insecurity associated with increased violence; General Strain Theory supports food insecurity as stressor pmc.ncbi.nlm.nih​* Human Organization. “University Student Food Insecurity as a Form of Structural Violence” (May 2023): Uses structural violence framework for institutional food insecurity harm meridian.allenpress​Counter/Nuance Source:* CSIS. “Dangerously Hungry: The Link between Food Insecurity and Conflict” (April 2023): Notes agricultural abundance can also drive conflict; food-conflict link is “complex” https://open.substack.com/live-stream/74795?utm_source=live-stream-scheduled-upsellcsis​ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit johnnyprofaneknapp.substack.com/subscribe

Analytic Dreamz: Notorious Mass Effect
"KODAK BLACK - JUST GETTING STARTED"

Analytic Dreamz: Notorious Mass Effect

Play Episode Listen Later Oct 31, 2025 7:14


Linktree: ⁠https://linktr.ee/Analytic⁠Join The Normandy For Additional Bonus Audio And Visual Content For All Things Nme+! Join Here: ⁠https://ow.ly/msoH50WCu0K⁠In this segment of Notorious Mass Effect, Analytic Dreamz dissects Kodak Black's highly anticipated album Just Getting Started, dropping October 31, 2025, via Vulture Love / Capitol Records. Announced October 10, the 20-track project clocks in at ~10 minutes, with pre-save live on Apple Music and Amazon Music Unlimited. The lead single “Still Get Chanel” featuring Chance the Rapper—produced by Dr. Zeus—reunites the duo since 2017, blending romantic soul with heavy drums and lyrics on perseverance. Trailer co-sign from NFL MVP Lamar Jackson, Kodak's Pompano Beach native and pardon advocate, amplifies hype. Analytic Dreamz breaks down Kodak's 2024 momentum: Dieuson Octave, Trill Bill, and Christmas drop Gift for the Streets (12 tracks with Lil Yachty, Veeze, Rob49). 2025 singles like “Imma Shoot,” “Keys to the City,” and collabs on Sniper Gang & JACKBOYS 2 showcase relentless output. From “Tunnel Vision” (#6 Hot 100) to Dying to Live (#1 Billboard 200, “ZEZE” 6× Platinum), Kodak's 7 albums, 11 mixtapes, 36 singles, and 25B+ streams underline resilience post-2021 pardon. Analytic Dreamz analyzes creative evolution, vulnerability, and potential 2026 dominance. Support this podcast at — https://redcircle.com/analytic-dreamz-notorious-mass-effect/donationsAdvertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy

Startup Inside Stories
ROBOTS at HOME, a TRILLION-Dollar Bubble, and the "Fictitious Value" of AI

Startup Inside Stories

Play Episode Listen Later Oct 30, 2025 69:14


In this week's roundtable, the debate heats up. We analyze OpenAI's new corporate structure (it is finally a "normal company") and the split of its new cap table: 27% for Microsoft, 26% for employees, 2% for Jony Ive... and Sam Altman? We discuss the "infinite loop" of value and the anecdote of the AI startup valued in the billions whose technology Bain replicated in just two weeks. Are we looking at a trillion-dollar bubble? We also hear César defend his view of AI agents versus workflows, and we talk about the future of Nvidia's humanoid robots, the mass layoffs at Amazon, and the new Spanish unicorn, Securitae, which is going public at a $1.25B valuation.

Foundr Magazine Podcast with Nathan Chan
599: They Rejected Her Idea, She Turned it into a BILLION Dollar Business | Suneera Madhani (Best of Foundr)

Foundr Magazine Podcast with Nathan Chan

Play Episode Listen Later Oct 23, 2025 40:53


Suneera Madhani built Stax from an idea her employer rejected into a $1B fintech unicorn processing over $25B in payments. In this interview, the Stax co-founder shares how she went from selling credit card terminals out of her car to pioneering the first subscription-based payment processor, raising over $500M in capital, and scaling a company now generating $120M+ in revenue. From turning down a $17.5M acquisition offer to building an omnichannel platform before “fintech” was even a word, Suneera breaks down the strategies, resilience, and leadership lessons that took her from a scrappy founder to one of the most successful female entrepreneurs in tech. What you'll learn from this interview: • How Suneera turned a rejected idea into a billion-dollar company • The scrappy marketing tactics that got her first 250 customers • Why she turned down a $17.5M acquisition offer early on • The lessons from raising over $500M in funding and navigating investors • How to build an MVP using white-label solutions and customer feedback • The importance of execution and focus over “big ideas” • Why resilience, intuition, and community are critical for long-term success • The rebrand from FatMerchant to Stax and what it taught her about scaling By the end of this interview, you'll walk away with proven insights for scaling a fintech or SaaS company from zero to unicorn status—so you can apply the same strategies to grow your own business with focus and resilience. SAVE 50% ON OMNISEND FOR 3 MONTHS Get 50% off your first 3 months of email and SMS marketing with Omnisend with the code FOUNDR50. Just head to https://your.omnisend.com/foundrhttps://your.omnisend.com/foundr to get started. HOW WE CAN HELP YOU SCALE YOUR BUSINESS FASTER Learn directly from 7, 8 & 9-figure founders inside Foundr+ Start your $1 trial → https://www.foundr.com/startdollartrial PREFER A CUSTOM ROADMAP AND 1-ON-1 COACHING? → Starting from scratch? Apply here → https://foundr.com/pages/coaching-start-application → Already have a store? Apply here → https://foundr.com/pages/coaching-growth-application CONNECT WITH NATHAN CHAN Instagram → https://www.instagram.com/nathanchan LinkedIn → https://www.linkedin.com/in/nathanhchan/ CONNECT WITH SUNEERA MADHANI Website → https://staxpayments.com/ Instagram → https://www.instagram.com/suneeramadhani/ LinkedIn → https://www.linkedin.com/in/suneeramadhani/ FOLLOW FOUNDR FOR MORE BUSINESS GROWTH STRATEGIES YouTube → https://bit.ly/2uyvzdt Website → https://www.foundr.com Instagram → https://www.instagram.com/foundr/ Facebook → https://www.facebook.com/foundr Twitter → https://www.twitter.com/foundr LinkedIn → https://www.linkedin.com/company/foundr/ Podcast → https://www.foundr.com/podcast

Solar Maverick Podcast
SMP 243: Energy Market Shake-Ups & the Rise of Solar Repowering

Solar Maverick Podcast

Play Episode Listen Later Oct 20, 2025 5:37


This is episode 36 of The League, hosts David Magid and Benoy Thanjan (aka The Solar Maverick) break down the biggest clean energy headlines of the week. They cover: TotalEnergies' $1.25B sale to KKR and what it signals about renewable asset valuations. The collapse of $24B in U.S. Hydrogen Hub contracts and the broader implications for hydrogen's future. The IEA's downgraded global renewable forecast—and why solar still leads the way. The growing opportunity in solar repowering, where upgrading aging assets can boost returns at a fraction of the cost. Host Bio: David Magid David Magid is a seasoned renewable energy executive with deep expertise in solar development, financing, and operations. He has worked across the clean energy value chain, leading teams that deliver distributed generation and community solar projects. David is widely recognized for his strategic insights on interconnection, market economics, and policy trends shaping the U.S. solar industry. Connect with David on LinkedIn: https://www.linkedin.com/in/davidmagid/   Host Bio: Benoy Thanjan Benoy Thanjan is the Founder and CEO of Reneu Energy, solar developer and consulting firm, and a strategic advisor to multiple cleantech startups. Over his career, Benoy has developed over 100 MWs of solar projects across the U.S., helped launch the first residential solar tax equity funds at Tesla, and brokered $45 million in Renewable Energy Credits (“REC”) transactions. Prior to founding Reneu Energy, Benoy was the Environmental Commodities Trader in Tesla's Project Finance Group, where he managed one of the largest environmental commodities portfolios. He originated REC trades and co-developed a monetization and hedging strategy with senior leadership to enter the East Coast market. As Vice President at Vanguard Energy Partners, Benoy crafted project finance solutions for commercial-scale solar portfolios. His role at Ridgewood Renewable Power, a private equity fund with 125 MWs of U.S. renewable assets, involved evaluating investment opportunities and maximizing returns. He also played a key role in the sale of the firm's renewable portfolio. Earlier in his career, Benoy worked in Energy Structured Finance at Deloitte & Touche and Financial Advisory Services at Ernst & Young, following an internship on the trading floor at D.E. Shaw & Co., a multi billion dollar hedge fund. Benoy holds an MBA in Finance from Rutgers University and a BS in Finance and Economics from NYU Stern, where he was an Alumni Scholar. Connect with Benoy on LinkedIn: https://www.linkedin.com/in/benoythanjan/ Learn more: https://reneuenergy.com   If you have any questions or comments, you can email us at info@reneuenergy.com.

Unchained
The Chopping Block: Perp Wars & Stablecoin Battles: Hyperliquid, Aster, Tether - Ep. 911

Unchained

Play Episode Listen Later Sep 26, 2025 62:47


 Welcome to The Chopping Block – where crypto insiders Haseeb Qureshi, Tom Schmidt, Tarun Chitra, and Robert Leshner chop it up about the latest in crypto. This week, we're joined by Farooq Malik, co-founder and CEO of Rain, as two parallel wars erupt across crypto: the Perp DEX war between Hyperliquid and the CZ-backed Aster, and the deepening battle for stablecoin dominance. As Aster rockets to $30B in daily volume, we debate whether it's real adoption or points-fueled froth — and what it means for Hyperliquid's lead. Then we dive into Tether's shocking $500B valuation play, Circle's shrinking moat, and how Rain is building real-world rails for stablecoin payments. If crypto has two new battlegrounds — trading venues and money itself — this is where the future is getting drawn. Show highlights

Business Lunch
The Loyalty Illusion: Why Points Don't Create Love

Business Lunch

Play Episode Listen Later Sep 19, 2025 47:04


Roland Frasier and Ryan Deiss break down the “loyalty illusion”—why points and perks often backfire, how spreadsheet thinking killed customer love, and a practical framework to audit or rebuild a program that actually increases retention, spend, and referrals.
What you'll learn
Why “loyalty penalties” drive your best customers away
The airline/credit-card miles economics—and how devaluation erodes $25B in perceived value
The 5-Question Loyalty Audit (value, simplicity, frequency of wins, emotion vs. switching cost, financial sanity)
What great looks like: status, access, and convenience (not discounts)
A 7-step roadmap to design (or reset) your program
Timestamps
00:00 Cold open: founders' meeting recap, wine cellar banter
02:05 The hook: the “loyalty illusion” and why consumers feel trapped
05:20 Consumer POV: when complexity makes customers give up
08:10 Finance-driven devaluation: how “pencil-whipping” kills goodwill
09:45 Airlines > miles > credit cards: the $25B machine and breakage
12:40 From distance flown to dollars spent: fallout and backlash
15:05 “Loyalty penalty”: new-customer offers vs. existing customers
16:50 The 5-Question Loyalty Audit (red flags & benchmarks)
19:30 Simplicity wins: JetBlue/Southwest lessons (and where they slipped)
22:15 Frequency of wins: Starbucks habit loop vs. margin compression
25:20 Luxury model: status & access (Hermès, Four Seasons, 100 Acre)
28:40 Access > discounts: Wynn Private Access, line-skip convenience
31:10 Choosing your currency: points, status, experiences (Sephora case)
34:35 Setting earn ratios: 2–5% cost with outsized perceived value
37:10 Tiering for aspiration: Prime renewals, why Amazon is an outlier
39:20 7-Step Roadmap: objective → currency → earn ratio → tiers → early wins → daily integration → quarterly audits
43:30 Operator action items; consumer playbook (negotiate, switch, diversify)
46:10 Ultimate test: does your program create love—or hostages?
47:40 Closing thoughts & invitations to share experiences
Takeaways
Discounts train delay; access creates desire. If

Dr. Howard Smith Oncall
Middlefield Original Cheese Co-Op's Products Have Listeria Contamination

Dr. Howard Smith Oncall

Play Episode Listen Later Aug 23, 2025 1:35


Vidcast: https://www.instagram.com/p/DNrymjdWsfn/
This bacterium causes a severe and sometimes fatal systemic infection in the very young, older frail individuals, and those with weakened immune systems. Listeria can also trigger miscarriages and stillbirths. The affected items include 100% Grass-Fed Pepper Jack Cheese with Lot Code 251661; Copia Collective 100% Grass-Fed Pepper Jack Cheese with Lot Code 251661; Horseradish Flavored Cheese with Lot Code 2524061; Monterey Jack Cheese with Lot Code 251672 and dated 7-16-25B; and Farmers Cheese in the same sizes and codes. About 5,433 pounds of cheese were sold in Ohio between July 14 and August 7, 2025. The products were shipped to manufacturers, distributors, and sold in retail stores across the state. Do not eat these cheeses. Return them to the place of purchase for a refund. For questions, contact Middlefield Original Cheese Co-Op at 1-440-632-5567 or via the email nevinbyler@middlefieldcheese.com.
https://www.fda.gov/safety/recalls-market-withdrawals-safety-alerts/middlefield-original-cheese-co-op-recalls-100-grass-fed-pepper-jack-cheese-and-horseradish-flavored
#middlefield #cheese #listeria #infection #recall

Wait...What? #sportsbiz chat with DP & McGhee

Episode 122 | DP & McGhee
Co-hosts David Paro and Tim McGhee open with a sharp look at recent headline-making moves in sports media. They break down the NFL's $2.5B stake in ESPN—part of a massive $25B valuation—and Fox Sports' strategic investment in IndyCar, exploring how these deals could reshape the sports broadcast and rights landscape. The discussion then tackles the troubling persistence of sexism and misogyny in sports. DP & McGhee address incidents like sex toys being thrown onto the court at WNBA games, and examine both the progress and pushback faced by Jen Pawol, MLB's first female umpire, who recently made history in Atlanta. This week's guest is Liz DiLullo Brown, EVP and Chief Marketing & Business Relationships Officer for Little League International. Liz shares insights from the just-completed Little League Softball World Series and previews the Little League World Series, beginning August 13 in Williamsport, PA. She explains how Little League is evolving to stay culturally relevant and details the organization's growing partnership with Major League Baseball to inspire the next generation of athletes.

Cyber Briefing
August 05, 2025 - Cyber Briefing

Cyber Briefing

Play Episode Listen Later Aug 5, 2025 9:05


EUVC
VC | E537 | This Week in European Tech with Dan, Mads & Lomax

EUVC

Play Episode Listen Later Aug 4, 2025 68:32


Welcome back to another episode of Upside at the EUVC Podcast, where Dan Bowyer, Mads Jensen of SuperSeed and Lomax from Outsized Ventures unpack what's happening in European tech and venture capital. This week: Why Meta and Microsoft are minting cash from AI, what Figma's IPO signals for SaaS, whether the EU got rolled in its new trade deal with the US, and how Europe's AI scene is finally delivering billion‑dollar exits. Plus: OpenAI's new “Study Mode” and Harry Stebbings' Project Europe—an “anti‑YC” deep‑tech accelerator for founders under 25.

Squawk on the Street
SOTS 2nd Hour: Meta Expectations, Fed in Focus, and the View From The C-Suite – w/GSK & Hershey CEOs 7/30/25

Squawk on the Street

Play Episode Listen Later Jul 30, 2025 42:59


Stocks hovering around record highs ahead of a Fed decision and key report cards out of Big Tech: Sara Eisen and David Faber broke down the latest on the data front (Q2 GDP, new payrolls data, and pending home sales at the top of the hour) along with some new commentary around prices and tariffs from consumer-facing earnings. RBC Tech analyst Brad Erickson broke down his bull case for Meta ahead of results tonight, while former Fed President Esther George discussed her predictions when it comes to Fed Chair Powell and rates. Plus: the view from the C-Suite… This hour: the CEO of pharmaceutical giant GlaxoSmithKline talked about her expectations for tariffs on the industry; hear the CEO of Starbucks' take on competition, as same-store sales there disappoint; the CEO of Hershey joined the team for her last broadcast interview in the role with her latest on the consumer, M&A expectations, and legacy; and more from the CEO of Palo Alto as the company announces plans to acquire CyberArk for ~$25B. Squawk on the Street Disclaimer

On The Chain - Blockchain and Cryptocurrency News + Opinion
XRP: Ready to Explode? | Ripple's $1.25B DeFi Play Could Unlock Trillions

On The Chain - Blockchain and Cryptocurrency News + Opinion

Play Episode Listen Later Jul 26, 2025 84:45


XRP: Ready to Explode? | Ripple's $1.25B DeFi Play Could Unlock Trillions Ripple's massive move — acquiring prime broker Hidden Road in a $1.25B deal that could reshape institutional crypto forever. Brad Garlinghouse breaks it down in “Crypto in One Minute,” but here's what the XRP Army really needs to know: this isn't just another acquisition — it's a DeFi power play that could unlock trillions in trade volume, transform TradFi clearing, and position XRP and RLUSD at the heart of institutional adoption. ✅ Ripple's global prime brokerage strategy ✅ How XRP + RLUSD enable cross-margining ✅ Doppler's new RLUSD use case ✅ XRP charts: 21 EMA and bullish crossover incoming ✅ 39% of U.S. crypto holders already spending crypto ✅ FireAid scandal – $100M missing? ✅ Trump, Nvidia, Powell & the AI race

DeFi Slate
How Wall Street Learned To Love Tokenization with Michael Sonnenshein

DeFi Slate

Play Episode Listen Later Jul 18, 2025 34:27


The tokenized RWA market just hit $25B. Is this institutional adoption or just hype dressed up in compliance theater? In today's episode, we sit down with Michael Sonnenshein, COO of Securitize, to explore what's really happening. With nearly $4B in tokenized assets and partnerships with BlackRock and Apollo, Securitize is leading the charge. We dive into why Wall Street is embracing tokenization now, what makes their "native" approach different, and the risks everyone's overlooking in the rush to tokenize everything. Let's get into it.
The Rollup
---
Newton is the trust layer for autonomous finance. Smart. Secure. Verifiable. Built for a future where AI agents replace apps and interfaces. Learn more here: https://www.magicnewton.com/
Get effortless access to crypto's best DeFi yields. Continually rebalanced by AI powered Keepers to earn you more while saving you time and reducing costs. Learn more here: https://summer.fi/earn?referralCode=2000096
----
Website: https://therollup.co/
Spotify: https://open.spotify.com/show/1P6ZeYd9vbF3hJA2n7qoL5?si=7230787bb90947ef
Podcast: https://therollup.co/category/podcast
Follow us on X: https://www.x.com/therollupco
Follow Rob on X: https://www.x.com/robbie_rollup
Follow Andy on X: https://www.x.com/ayyyeandy
Join our TG group: https://t.me/+8ARkR_YZixE5YjBh
The Rollup Disclosures: https://therollup.co/the-rollup-discl

Furthermore with Amanda Head
Exclusive with President Trump: Clinton cover-up, FBI corruption & Powell getting fired, no topic off limits

Furthermore with Amanda Head

Play Episode Listen Later Jul 16, 2025 30:53


On this episode of the podcast, President Donald J. Trump joins host Amanda Head and her “Just The News, No Noise” TV news co-host John Solomon to discuss a wide array of issues that are currently dominating the headlines. President Trump discusses the FBI's investigation into the weaponization against him and expresses support for a special prosecutor. Additionally, the 45th and 47th President supported the declassification of documents like the Hillary Clinton email annex, as well as the intercept where Hillary Clinton approves hanging the Russian collusion bombshell on Trump's first presidential campaign in 2016. The President criticized our current voting system, emphasized the need for secure borders, and highlighted the success of his tariff implementation, mentioning a $25B surplus last month. Trump also criticized Fed Chair Jerome Powell's policies and discussed whether or not he would take action to fire and replace Powell. You can watch Amanda Head and John Solomon every weekday evening at 6PM ET on the Real America's Voice Network. You can also follow them on your favorite social media channel by searching for their respective handles: @AmandaHead @FurthermorePod @JSolomonReports @JustTheNews
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

TD Ameritrade Network
Chart of the Day: GOOGL $25B Data Center Spend

TD Ameritrade Network

Play Episode Listen Later Jul 15, 2025 3:43


Alphabet (GOOGL) announced plans to spend up to $25B to expand its data center presence in America. Ben Watson drops by to look at the technical chart perspective on the Mag 7 stock. In the past 5 days, he looks at a range between $177-$182. On a longer-term chart, he sees a sideways trend over the last year. Ben says to watch tests of the $182 level, but also cautions of some bearish divergence in the RSI momentum study.
======== Schwab Network ========
Empowering every investor and trader, every market day.
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – / schwabnetwork
Follow us on Facebook – / schwabnetwork
Follow us on LinkedIn - / schwab-network
About Schwab Network - https://schwabnetwork.com/about

Grain Markets and Other Stuff
"China, Get the Hell Out of American Agriculture!!" - US Senator

Grain Markets and Other Stuff

Play Episode Listen Later Jul 9, 2025 13:04


Joe's Premium Subscription: www.standardgrain.com
Grain Markets and Other Stuff Links:
Apple Podcasts
Spotify
TikTok
YouTube
Futures and options trading involves risk of loss and is not suitable for everyone.
0:00 China, Get the Hell Out!
3:58 Tuesday Selloff
7:52 Brazil Export Problems
9:09 Tariff Update
10:38 Wheat Purchase Agreements
11:47 Flash Sales

Ones Ready
Ops Brief 067: Daily Drop - 27 June 2025 (“Razin Caine” Bombs Iran & Roasts Top Gun)

Ones Ready

Play Episode Listen Later Jun 27, 2025 14:57


Send us a text
The Pentagon dropped a $962B budget bomb, and Razin Caine dropped an actual bomb—well, more like 125 aircraft dropping GBU-57 bunker busters on Iranian nuclear sites. In this no-fluff Daily Drop, Jared unpacks the FY26 defense budget, the rise of the F-47, the death of the A-10, and why space is the new high ground (sorry, Wedgetail). He also calls out bureaucratic nonsense, praises enlisted studs like Tech Sgt. Montoya, and side-eyes yet another “brilliant” plan to split the Air Force into four separate services. Meanwhile, Hoist is still the drink of choice, even if Congress can't get theirs together.

Bankless
ROLLUP: Circle's $25B IPO Mania | Genius Act Passes? | Stripe Buys Privy | Polymarket Meets Twitter

Bankless

Play Episode Listen Later Jun 13, 2025


This week, Ryan and David unpack Circle's stunning IPO debut, soaring to a $25B market cap and triggering Wall Street's stablecoin frenzy. They break down the landmark Genius Act vote, bringing unprecedented regulatory clarity to stablecoins, and dive into Stripe's latest crypto bet, acquiring wallet innovator Privy. Polymarket goes mainstream in an official integration with Twitter, while SEC Chair Paul Atkins embraces DeFi as fundamentally American. Plus, Gemini quietly files for IPO, Plasma's billion-dollar Tether chain ignites, and Trump unexpectedly endorses crypto at Coinbase. ------

Grain Markets and Other Stuff
Tight Stocks, Surging Crude, Weak Dollar - Why Can't Corn Rally???

Grain Markets and Other Stuff

Play Episode Listen Later Jun 13, 2025 22:14


Joe's Premium Subscription: www.standardgrain.com
Grain Markets and Other Stuff Links:
Apple Podcasts
Spotify
TikTok
YouTube
Futures and options trading involves risk of loss and is not suitable for everyone.
0:00 Friendly Corn Numbers (USDA)
4:12 Crude SURGES
6:19 US Dollar is Weak
9:25 US Drought / Weather
14:46 Biofuel News / RVOs
16:49 Export Sales
18:40 ICE and Ag

This Week in Pre-IPO Stocks
E206: Scale AI gets $14.3B from Meta, hits $29B valuation; Starlink doubles subs to 6M, adds 100K in Africa; SpaceX expands Starship launch capacity in Florida; Databricks adds Google Gemini, hits $72.8B valuation; Perplexity partners with Nvidia, eyes $1

This Week in Pre-IPO Stocks

Play Episode Listen Later Jun 13, 2025 10:15


Send us a text
00:00 - Intro
00:51 - Scale AI gets $14.3B from Meta, hits $29B valuation
02:03 - Starlink doubles subs to 6M, adds 100K in Africa
03:22 - SpaceX expands Starship launch capacity in Florida
04:08 - Databricks adds Google Gemini, hits $72.8B valuation
05:09 - Perplexity partners with Nvidia, eyes $14B raise
06:08 - Glean raises $150M at $7.2B valuation
07:13 - Mistral hits $6B valuation, expands sovereign AI reach
08:32 - Gecko Robotics doubles to $1.25B valuation
09:28 - Bullish files confidentially for US IPO

In Depth
Inside Linear: Why craft and focus still win in product building | Karri Saarinen (Co-founder and CEO)

In Depth

Play Episode Listen Later Jun 10, 2025 93:04


Karri Saarinen is the co-founder and CEO of Linear, the project management tool built for high-performance software teams. Since its founding in 2019, Linear has achieved a valuation of $1.25B as of 10th June 2025 and now counts companies like OpenAI, Ramp and Vercel as customers. Before founding Linear, Karri led design at Airbnb and Coinbase, and previously co-founded Kippt, a bookmarking tool acquired by Coinbase.
In today's episode, we discuss:
Karri's childhood love for computers that shaped his career
The lessons he learned from a failed first startup
Linear's founding principles
The early validation strategies used to shape the product
Why Karri believes in small teams
And much more…
Referenced: Airbnb, Brian Armstrong, Brian Chesky, Coinbase, Jori Lallo, Linear, Tuomas Artman, Y Combinator
Where to find Karri: LinkedIn, Twitter/X
Where to find Brett: LinkedIn, Twitter/X
Where to find First Round Capital: Website, First Round Review, Twitter/X, YouTube
Timestamps
(1:37) Childhood roots in computers and design
(6:54) Founding Kippt and lessons from a failed bookmarking startup
(13:14) Lessons from a serial entrepreneur
(19:32) Why teams shouldn't grow too quickly
(25:03) Linear's early beginnings
(36:55) The unexpected power of intuition
(42:41) Linear's unusual approach to user growth
(47:29) What shaped Linear's early product roadmap
(52:02) Startups shouldn't try to boil the ocean
(57:30) The power of extreme focus
(59:18) Design “something for someone”
(1:04:29) Flexibility vs. simplicity
(1:17:27) Lead your team with strong principles
(1:24:45) Design founders vs. engineering founders

Unchained
Decentralization Used to Mean Something. Now It's Just a Vibe. – The Chopping Block - Ep. 842

Unchained

Play Episode Listen Later May 29, 2025 53:20


Welcome to The Chopping Block – where crypto insiders Haseeb Qureshi, Tom Schmidt, Tarun Chitra, and Robert Leshner chop it up about the latest in crypto. In this episode, the gang reunites to confront a troubling pattern: we're making the same mistakes all over again. From the $223 million Sui hack and validator-led censorship to Coinbase's insider data breach and the Trump token dinner spectacle, this week feels like a remix of the industry's most painful lessons. The crew reflects on how decentralization is being quietly redefined, why newer chains ignore crypto's origin story, and what it means when memecoins are the new access pass to political influence. Also: James Wynn's billion-dollar trades, fading cypherpunk values, and a creeping sense that the crypto future looks a lot like its past. Show highlights

Paul's Security Weekly
AI in AppSec: Agentic Tools, Vibe Coding Risks & Securing Non-Human Identities - Mo Aboul-Magd, Shahar Man, Brian Fox, Mark Lambert - ASW #332

Paul's Security Weekly

Play Episode Listen Later May 27, 2025 64:35


ArmorCode unveils Anya—the first agentic AI virtual security champion designed specifically for AppSec and product security teams. Anya brings together conversation and context to help AppSec, developers and security teams cut through the noise, prioritize risks, and make faster, smarter decisions across code, cloud, and infrastructure. Built into the ArmorCode ASPM Platform and backed by 25B findings, 285+ integrations, natural language intelligence, and role-aware insights, Anya turns complexity into clarity, helping teams scale securely and close the security skills gap. Anya is now generally available and included as part of the ArmorCode ASPM Platform. Visit https://securityweekly.com/armorcodersac to request a demo! As “vibe coding”, the practice of using AI tools with specialized coding LLMs to develop software, is making waves, what are the implications for security teams? How can this new way of developing applications be made secure? Or have the horses already left the stable? Segment Resources: https://www.backslash.security/press-releases/backslash-security-reveals-in-new-research-that-gpt-4-1-other-popular-llms-generate-insecure-code-unless-explicitly-prompted https://www.backslash.security/blog/vibe-securing-4-1-pillars-of-appsec-for-vibe-coding This segment is sponsored by Backslash. Visit https://securityweekly.com/backslashrsac to learn more about them! The rise of AI has largely mirrored the early days of open source software. With rapid adoption amongst developers who are trying to do more with less time, unmanaged open source AI presents serious risks to organizations. Brian Fox, CTO & Co-founder of Sonatype, will dive into the risks associated with open source AI and best practices to secure it. Segment Resources: https://www.sonatype.com/solutions/open-source-ai https://www.sonatype.com/blog/beyond-open-vs.-closed-understanding-the-spectrum-of-ai-transparency https://www.sonatype.com/resources/whitepapers/modern-development-in-ai-era This segment is sponsored by Sonatype. Visit https://securityweekly.com/sonatypersac to learn more about Sonatype's AI SCA solutions! The surge in AI agents is creating a vast new cyber attack surface with Non-Human Identities (NHIs) becoming a prime target. This segment will explore how SandboxAQ's AQtive Guard Discover platform addresses this challenge by providing real-time vulnerability detection and mitigation for NHIs and cryptographic assets. We'll discuss the platform's AI-driven approach to inventory, threat detection, and automated remediation, and its crucial role in helping enterprises secure their AI-driven future. To take control of your NHI security and proactively address the escalating threats posed by AI agents, visit https://securityweekly.com/sandboxaqrsac to schedule an early deployment and risk assessment. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-332

Paul's Security Weekly TV
AI in AppSec: Agentic Tools, Vibe Coding Risks & Securing Non-Human Identities - Mo Aboul-Magd, Brian Fox, Mark Lambert, Shahar Man - ASW #332

Paul's Security Weekly TV

Play Episode Listen Later May 27, 2025 64:35


ArmorCode unveils Anya—the first agentic AI virtual security champion designed specifically for AppSec and product security teams. Anya brings together conversation and context to help AppSec, developers and security teams cut through the noise, prioritize risks, and make faster, smarter decisions across code, cloud, and infrastructure. Built into the ArmorCode ASPM Platform and backed by 25B findings, 285+ integrations, natural language intelligence, and role-aware insights, Anya turns complexity into clarity, helping teams scale securely and close the security skills gap. Anya is now generally available and included as part of the ArmorCode ASPM Platform. Visit https://securityweekly.com/armorcodersac to request a demo! As “vibe coding”, the practice of using AI tools with specialized coding LLMs to develop software, is making waves, what are the implications for security teams? How can this new way of developing applications be made secure? Or have the horses already left the stable? Segment Resources: https://www.backslash.security/press-releases/backslash-security-reveals-in-new-research-that-gpt-4-1-other-popular-llms-generate-insecure-code-unless-explicitly-prompted https://www.backslash.security/blog/vibe-securing-4-1-pillars-of-appsec-for-vibe-coding This segment is sponsored by Backslash. Visit https://securityweekly.com/backslashrsac to learn more about them! The rise of AI has largely mirrored the early days of open source software. With rapid adoption amongst developers who are trying to do more with less time, unmanaged open source AI presents serious risks to organizations. Brian Fox, CTO & Co-founder of Sonatype, will dive into the risks associated with open source AI and best practices to secure it. Segment Resources: https://www.sonatype.com/solutions/open-source-ai https://www.sonatype.com/blog/beyond-open-vs.-closed-understanding-the-spectrum-of-ai-transparency https://www.sonatype.com/resources/whitepapers/modern-development-in-ai-era This segment is sponsored by Sonatype. Visit https://securityweekly.com/sonatypersac to learn more about Sonatype's AI SCA solutions! The surge in AI agents is creating a vast new cyber attack surface with Non-Human Identities (NHIs) becoming a prime target. This segment will explore how SandboxAQ's AQtive Guard Discover platform addresses this challenge by providing real-time vulnerability detection and mitigation for NHIs and cryptographic assets. We'll discuss the platform's AI-driven approach to inventory, threat detection, and automated remediation, and its crucial role in helping enterprises secure their AI-driven future. To take control of your NHI security and proactively address the escalating threats posed by AI agents, visit https://securityweekly.com/sandboxaqrsac to schedule an early deployment and risk assessment. Show Notes: https://securityweekly.com/asw-332

Application Security Weekly (Audio)
AI in AppSec: Agentic Tools, Vibe Coding Risks & Securing Non-Human Identities - Mo Aboul-Magd, Shahar Man, Brian Fox, Mark Lambert - ASW #332

Application Security Weekly (Audio)

Play Episode Listen Later May 27, 2025 64:35


ArmorCode unveils Anya—the first agentic AI virtual security champion designed specifically for AppSec and product security teams. Anya brings together conversation and context to help AppSec, developers and security teams cut through the noise, prioritize risks, and make faster, smarter decisions across code, cloud, and infrastructure. Built into the ArmorCode ASPM Platform and backed by 25B findings, 285+ integrations, natural language intelligence, and role-aware insights, Anya turns complexity into clarity, helping teams scale securely and close the security skills gap. Anya is now generally available and included as part of the ArmorCode ASPM Platform. Visit https://securityweekly.com/armorcodersac to request a demo! As “vibe coding”, the practice of using AI tools with specialized coding LLMs to develop software, is making waves, what are the implications for security teams? How can this new way of developing applications be made secure? Or have the horses already left the stable? Segment Resources: https://www.backslash.security/press-releases/backslash-security-reveals-in-new-research-that-gpt-4-1-other-popular-llms-generate-insecure-code-unless-explicitly-prompted https://www.backslash.security/blog/vibe-securing-4-1-pillars-of-appsec-for-vibe-coding This segment is sponsored by Backslash. Visit https://securityweekly.com/backslashrsac to learn more about them! The rise of AI has largely mirrored the early days of open source software. With rapid adoption amongst developers who are trying to do more with less time, unmanaged open source AI presents serious risks to organizations. Brian Fox, CTO & Co-founder of Sonatype, will dive into the risks associated with open source AI and best practices to secure it. Segment Resources: https://www.sonatype.com/solutions/open-source-ai https://www.sonatype.com/blog/beyond-open-vs.-closed-understanding-the-spectrum-of-ai-transparency https://www.sonatype.com/resources/whitepapers/modern-development-in-ai-era This segment is sponsored by Sonatype. Visit https://securityweekly.com/sonatypersac to learn more about Sonatype's AI SCA solutions! The surge in AI agents is creating a vast new cyber attack surface with Non-Human Identities (NHIs) becoming a prime target. This segment will explore how SandboxAQ's AQtive Guard Discover platform addresses this challenge by providing real-time vulnerability detection and mitigation for NHIs and cryptographic assets. We'll discuss the platform's AI-driven approach to inventory, threat detection, and automated remediation, and its crucial role in helping enterprises secure their AI-driven future. To take control of your NHI security and proactively address the escalating threats posed by AI agents, visit https://securityweekly.com/sandboxaqrsac to schedule an early deployment and risk assessment. Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-332

The Opperman Report
Claudio Bono - Founder Givearoof.Org

The Opperman Report

Play Episode Listen Later May 17, 2025 51:01


Claudio Bono - Founder Givearoof.Org
Citizen-Led Plan to End Homelessness Gains Traction—Will Leaders Act? Roof, Rehab, Rebuild: End Homelessness Now! My nonprofit, http://GiveARoof.org, has a bold plan to raise $25B yearly—without taxpayer funds—using credit card points, airline miles, & hotel loyalty programs. By uniting nonprofits via a shared IT system, we'll pool resources, identify causes, & act: reintegrate the willing, treat mental health & addiction, & build homes. With grants & local chamber of commerce partnerships, we'll create data to share locally, federally, and statewide, and create “welcome lounges” to process, clean, & groom the unhoused, preparing them for jobs & stability, and join forces with schools. This citizen-led solution, praised as “game-changing” by congressmen, hit 3M views on X. California has spent billions on homelessness, with no progress—conditions have only worsened. It's time to demand accountability from officials and implement a proven, actionable plan. Stop squandering funds on ineffective measures without data or results. Let's solve homelessness for good!
Twitter
Video
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-opperman-report--1198501/support.

This Week in Startups
Chime's IPO, Databricks' $1B Acquisition & Dave Rubin's Media Empire | E2126

This Week in Startups

Play Episode Listen Later May 15, 2025 66:37


Today's show: Chime is finally going public with strong financials and a shot at matching its $25B 2021 valuation, signaling real momentum in the IPO market. Databricks just made a $1B bet on agentic AI by acquiring Neon, a Postgres-as-a-service startup riding the new database wave. Then, Dave Rubin joins to share how he built and sold Locals, his uncancellable creator platform, all while navigating the intense media landscape.
Timestamps:
(0:00) Episode Teaser
(1:14) Jason and Alex open the show
(1:42) Why Chime's IPO is such a promising sign
(7:22) Chime's financials and valuation
(10:12) Squarespace - Use offer code TWIST to save 10% off your first purchase of a website or domain at https://www.Squarespace.com/TWIST
(12:17) So why did Databricks buy Neon?
(13:22) Where is the AI Integration Desktop App?
(17:29) Jason's plan to bring Americans back to the movies
(20:10) Northwest Registered Agent. Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit northwestregisteredagent.com/twist today!
(21:58) Make movies All-you-can-eat!
(26:26) Special Guest: Dave Rubin
(30:39) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
(31:41) Why Dave Rubin goes phone free for weeks at a time
(45:24) Why Identity politics is killing business and sports
(49:23) Can Locals reinvent subscription models?
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Links from episode:
Rubin Report on Locals: https://rubinreport.locals.com/
Follow Dave: X: https://x.com/RubinReport | YouTube: https://www.youtube.com/channel/UCJdKr0Bgd_5saZYqLCa9mng
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
(10:12) Squarespace - Use offer code TWIST to save 10% off your first purchase of a website or domain at https://www.Squarespace.com/TWIST
(20:10) Northwest Registered Agent. Form your entire business identity in just 10 clicks and 10 minutes. Get more privacy, more options, and more done—visit northwestregisteredagent.com/twist today!
(30:39) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST: Twitter: https://twitter.com/TWiStartups | YouTube: https://www.youtube.com/thisweekin | Instagram: https://www.instagram.com/thisweekinstartups | TikTok: https://www.tiktok.com/@thisweekinstartups | Substack: https://twistartups.substack.com
Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916

Strategy Simplified
S17E27: Walmart Raises Prices and Lessons from the Rodeo (Market Outsiders: May 15, 2025)

Strategy Simplified

Play Episode Listen Later May 15, 2025 19:29


Send us a text
In this Thursday Market Outsiders, Namaan and Jenny Rae analyze Walmart's price hikes amid looming tariffs, noting an increase in U.S. sales and an e-commerce surge as signs of consumer strength. Jenny Rae then draws business acumen lessons from a rodeo, spotlighting Quanta Services, a $25B electrical contractor behind utility power lines, discovered via a sponsor flag. They explore how curiosity about everyday experiences - like rodeo sponsors - builds market insights, urging listeners to question headlines and connect real-world dots. Join Market Outsiders live every weekday at 9:15AM ET on LinkedIn and YouTube - and now, episodes are also available on Strategy Simplified every Monday, Tuesday, and Thursday. Want the full daily experience? Follow the new Market Outsiders podcast to get every episode, Monday through Friday.
Subscribe to the new Market Outsiders feed (Apple, Spotify)
Follow Management Consulted on LinkedIn and subscribe on YouTube
Connect with Namaan and Jenny Rae on LinkedIn
Connect With Management Consulted
Schedule free 15min consultation with the MC Team. Watch the video version of the podcast on YouTube! Follow us on LinkedIn, Instagram, and TikTok for the latest updates and industry insights! Join an upcoming live event - case interview demos, expert panels, and more. Email us (team@managementconsulted.com) with questions or feedback.

Insightful Investor
#70 - David Powers: Wasatch Culture, Market Cycles, Long-Term

Insightful Investor

Play Episode Listen Later May 13, 2025 55:03


Dave is Senior Portfolio Manager at Wasatch Global Investors, a $25B equity manager based in Salt Lake City (as of 3/31/25). He discusses Wasatch's collaborative culture, lessons learned from navigating market cycles, and the discipline behind a long-term investment approach.

WBSRocks: Business Growth with ERP and Digital Transformation
WBSP712: Grow Your Business by Learning from Enterprise Software Stories - Jan 2025, Week 1, an Objective Panel Discussion

WBSRocks: Business Growth with ERP and Digital Transformation

Play Episode Listen Later Apr 29, 2025 60:29


Send us a text
The tech and enterprise software landscape is undergoing rapid transformation, marked by a mix of bold acquisitions and cautious public market strategies. While Databricks' CEO candidly states that “it's dumb to IPO this year,” citing market volatility and strategic timing, major players are making aggressive moves elsewhere. From Thomson Reuters' $600M acquisition of tax automation firm SafeSend to WWT's $1.25B blockbuster deal to acquire Softchoice, the momentum is clearly shifting toward consolidation and capability expansion. Microsoft's Satya Nadella has also sparked debate with his prediction that AI agents could overtake traditional SaaS models by 2025, signaling a fundamental shift in how software is built and consumed. Meanwhile, companies like SPS Commerce, Cass Information Systems, and Later are strengthening their portfolios through targeted acquisitions, reflecting a broader trend of investing in specialized tools and emerging platforms. Together, these developments hint at a future shaped less by IPOs and more by ecosystem dominance and AI-powered disruption.
In today's episode, we invited a panel of industry analysts for a live discussion on LinkedIn to analyze current enterprise software stories. We covered a lot of ground, including the direction and roadmaps of each enterprise software vendor. Finally, we analyzed future trends and how they might shape the enterprise software industry.
Background Soundtrack: Away From You – Mauro Somm
For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.

Unchained
The Chopping Block: Bitcoin's 200K Dream, Tariff Nightmares, & the Altcoin Exodus - Ep. 815

Unchained

Play Episode Listen Later Apr 10, 2025 65:19


Welcome to The Chopping Block – where crypto insiders Haseeb Qureshi, Tom Schmidt, Tarun Chitra, and Robert Leshner chop it up about the latest in crypto. In this episode, the crew is joined by Jeff Park, Alpha Liaison at Bitwise, for a deep dive into the chaos gripping global markets and what it all means for crypto. With tariffs ripping through equities and whispers of stagflation on the rise, Jeff breaks down why Bitcoin might still be headed for $200K – and why MicroStrategy might be the new altcoin. They also unpack Circle's delayed IPO, Ripple's $1.25B acquisition, and whether capital markets are finally warming up to crypto. Show highlights

Thinking Crypto Interviews & News

Crypto News: Ripple announced it is acquiring Hidden Road for $1.25B, becoming the first crypto company to own and operate a global, multi-asset prime broker. Standard Chartered sees XRP jumping over 500% to $12.50 by 2028, and expects XRP ETF approval in Q3 2025. Show Sponsor -